2026-01-03 00:00:06.667437 | Job console starting
2026-01-03 00:00:06.736255 | Updating git repos
2026-01-03 00:00:07.091127 | Cloning repos into workspace
2026-01-03 00:00:07.363466 | Restoring repo states
2026-01-03 00:00:07.394378 | Merging changes
2026-01-03 00:00:07.394398 | Checking out repos
2026-01-03 00:00:07.978045 | Preparing playbooks
2026-01-03 00:00:09.421934 | Running Ansible setup
2026-01-03 00:00:19.944869 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-01-03 00:00:22.490305 |
2026-01-03 00:00:22.491468 | PLAY [Base pre]
2026-01-03 00:00:22.628318 |
2026-01-03 00:00:22.632327 | TASK [Setup log path fact]
2026-01-03 00:00:22.775895 | orchestrator | ok
2026-01-03 00:00:22.813113 |
2026-01-03 00:00:22.813312 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-03 00:00:22.893537 | orchestrator | ok
2026-01-03 00:00:22.918786 |
2026-01-03 00:00:22.955329 | TASK [emit-job-header : Print job information]
2026-01-03 00:00:23.094822 | # Job Information
2026-01-03 00:00:23.095031 | Ansible Version: 2.16.14
2026-01-03 00:00:23.095066 | Job: testbed-deploy-next-in-a-nutshell-with-tempest-ubuntu-24.04
2026-01-03 00:00:23.095100 | Pipeline: periodic-midnight
2026-01-03 00:00:23.095123 | Executor: 521e9411259a
2026-01-03 00:00:23.095143 | Triggered by: https://github.com/osism/testbed
2026-01-03 00:00:23.095200 | Event ID: 088a682cd46040ec8feff3feacdb5d3f
2026-01-03 00:00:23.102676 |
2026-01-03 00:00:23.102817 | LOOP [emit-job-header : Print node information]
2026-01-03 00:00:24.078764 | orchestrator | ok:
2026-01-03 00:00:24.082929 | orchestrator | # Node Information
2026-01-03 00:00:24.083042 | orchestrator | Inventory Hostname: orchestrator
2026-01-03 00:00:24.083073 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-01-03 00:00:24.083096 | orchestrator | Username: zuul-testbed06
2026-01-03 00:00:24.083117 | orchestrator | Distro: Debian 12.12
2026-01-03 00:00:24.083142 | orchestrator | Provider: static-testbed
2026-01-03 00:00:24.083239 | orchestrator | Region:
2026-01-03 00:00:24.083265 | orchestrator | Label: testbed-orchestrator
2026-01-03 00:00:24.083285 | orchestrator | Product Name: OpenStack Nova
2026-01-03 00:00:24.083306 | orchestrator | Interface IP: 81.163.193.140
2026-01-03 00:00:24.128304 |
2026-01-03 00:00:24.128476 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-01-03 00:00:27.721733 | orchestrator -> localhost | changed
2026-01-03 00:00:27.730372 |
2026-01-03 00:00:27.730529 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-01-03 00:00:34.295654 | orchestrator -> localhost | changed
2026-01-03 00:00:34.385886 |
2026-01-03 00:00:34.387782 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-01-03 00:00:36.640777 | orchestrator -> localhost | ok
2026-01-03 00:00:36.652769 |
2026-01-03 00:00:36.652926 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-01-03 00:00:36.784998 | orchestrator | ok
2026-01-03 00:00:36.867084 | orchestrator | included: /var/lib/zuul/builds/7cea1800cefa4460941b05e9a4d84b02/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-01-03 00:00:36.918960 |
2026-01-03 00:00:36.919117 | TASK [add-build-sshkey : Create Temp SSH key]
2026-01-03 00:00:44.428292 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-01-03 00:00:44.428466 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/7cea1800cefa4460941b05e9a4d84b02/work/7cea1800cefa4460941b05e9a4d84b02_id_rsa
2026-01-03 00:00:44.428499 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/7cea1800cefa4460941b05e9a4d84b02/work/7cea1800cefa4460941b05e9a4d84b02_id_rsa.pub
2026-01-03 00:00:44.428522 | orchestrator -> localhost | The key fingerprint is:
2026-01-03 00:00:44.428805 | orchestrator -> localhost | SHA256:pLGYDa9pOLp5gCec8wf42KcuqWZnIEjqX/Qga034ObA zuul-build-sshkey
2026-01-03 00:00:44.428830 | orchestrator -> localhost | The key's randomart image is:
2026-01-03 00:00:44.428861 | orchestrator -> localhost | +---[RSA 3072]----+
2026-01-03 00:00:44.428881 | orchestrator -> localhost | | |
2026-01-03 00:00:44.428900 | orchestrator -> localhost | | |
2026-01-03 00:00:44.428918 | orchestrator -> localhost | | . . . |
2026-01-03 00:00:44.428936 | orchestrator -> localhost | | . .* = |
2026-01-03 00:00:44.428953 | orchestrator -> localhost | |* o+o+= S |
2026-01-03 00:00:44.428974 | orchestrator -> localhost | |B*.oXo+ |
2026-01-03 00:00:44.428992 | orchestrator -> localhost | |o+@E=* . |
2026-01-03 00:00:44.429009 | orchestrator -> localhost | | XoO.o. |
2026-01-03 00:00:44.429026 | orchestrator -> localhost | |Oo*++ |
2026-01-03 00:00:44.429043 | orchestrator -> localhost | +----[SHA256]-----+
2026-01-03 00:00:44.429101 | orchestrator -> localhost | ok: Runtime: 0:00:04.753369
2026-01-03 00:00:44.436237 |
2026-01-03 00:00:44.436387 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-01-03 00:00:44.517189 | orchestrator | ok
2026-01-03 00:00:44.550943 | orchestrator | included: /var/lib/zuul/builds/7cea1800cefa4460941b05e9a4d84b02/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-01-03 00:00:44.576269 |
2026-01-03 00:00:44.576363 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-01-03 00:00:44.625856 | orchestrator | skipping: Conditional result was False
2026-01-03 00:00:44.632978 |
2026-01-03 00:00:44.633074 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-01-03 00:00:45.884641 | orchestrator | changed
2026-01-03 00:00:45.894969 |
2026-01-03 00:00:45.895083 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-01-03 00:00:46.202815 | orchestrator | ok
2026-01-03 00:00:46.207900 |
2026-01-03 00:00:46.207983 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-01-03 00:00:46.779041 | orchestrator | ok
2026-01-03 00:00:46.788096 |
2026-01-03 00:00:46.788216 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-01-03 00:00:47.770028 | orchestrator | ok
2026-01-03 00:00:47.775215 |
2026-01-03 00:00:47.775298 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-01-03 00:00:47.835080 | orchestrator | skipping: Conditional result was False
2026-01-03 00:00:47.841183 |
2026-01-03 00:00:47.841280 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-01-03 00:00:49.537796 | orchestrator -> localhost | changed
2026-01-03 00:00:49.555907 |
2026-01-03 00:00:49.556003 | TASK [add-build-sshkey : Add back temp key]
2026-01-03 00:00:50.354260 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/7cea1800cefa4460941b05e9a4d84b02/work/7cea1800cefa4460941b05e9a4d84b02_id_rsa (zuul-build-sshkey)
2026-01-03 00:00:50.354432 | orchestrator -> localhost | ok: Runtime: 0:00:00.025762
2026-01-03 00:00:50.360444 |
2026-01-03 00:00:50.360530 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-01-03 00:00:51.390565 | orchestrator | ok
2026-01-03 00:00:51.409523 |
2026-01-03 00:00:51.409631 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-01-03 00:00:51.479104 | orchestrator | skipping: Conditional result was False
2026-01-03 00:00:51.605358 |
2026-01-03 00:00:51.605472 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-01-03 00:00:52.425514 | orchestrator | ok
2026-01-03 00:00:52.449442 |
2026-01-03 00:00:52.449564 | TASK [validate-host : Define zuul_info_dir fact]
2026-01-03 00:00:52.508973 | orchestrator | ok
2026-01-03 00:00:52.524178 |
2026-01-03 00:00:52.524293 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-01-03 00:00:53.623517 | orchestrator -> localhost | ok
2026-01-03 00:00:53.642224 |
2026-01-03 00:00:53.642337 | TASK [validate-host : Collect information about the host]
2026-01-03 00:00:55.141856 | orchestrator | ok
2026-01-03 00:00:55.191424 |
2026-01-03 00:00:55.191544 | TASK [validate-host : Sanitize hostname]
2026-01-03 00:00:55.377359 | orchestrator | ok
2026-01-03 00:00:55.383254 |
2026-01-03 00:00:55.383370 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-01-03 00:00:57.866747 | orchestrator -> localhost | changed
2026-01-03 00:00:57.872946 |
2026-01-03 00:00:57.873044 | TASK [validate-host : Collect information about zuul worker]
2026-01-03 00:00:58.951376 | orchestrator | ok
2026-01-03 00:00:58.960659 |
2026-01-03 00:00:58.960768 | TASK [validate-host : Write out all zuul information for each host]
2026-01-03 00:01:00.927809 | orchestrator -> localhost | changed
2026-01-03 00:01:00.936555 |
2026-01-03 00:01:00.936657 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-01-03 00:01:01.316573 | orchestrator | ok
2026-01-03 00:01:01.321981 |
2026-01-03 00:01:01.322070 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-01-03 00:02:24.563973 | orchestrator | changed:
2026-01-03 00:02:24.564486 | orchestrator | .d..t...... src/
2026-01-03 00:02:24.564578 | orchestrator | .d..t...... src/github.com/
2026-01-03 00:02:24.564642 | orchestrator | .d..t...... src/github.com/osism/
2026-01-03 00:02:24.564700 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-01-03 00:02:24.564751 | orchestrator | RedHat.yml
2026-01-03 00:02:24.591525 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-01-03 00:02:24.591549 | orchestrator | RedHat.yml
2026-01-03 00:02:24.591619 | orchestrator | = 2.2.0"...
2026-01-03 00:02:36.665231 | orchestrator | - Finding latest version of hashicorp/null...
2026-01-03 00:02:36.681458 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-01-03 00:02:36.822916 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-01-03 00:02:37.604367 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-01-03 00:02:37.666292 | orchestrator | - Installing hashicorp/local v2.6.1...
2026-01-03 00:02:38.241503 | orchestrator | - Installed hashicorp/local v2.6.1 (signed, key ID 0C0AF313E5FD9F80)
2026-01-03 00:02:38.306823 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-01-03 00:02:39.013252 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-01-03 00:02:39.013344 | orchestrator |
2026-01-03 00:02:39.013355 | orchestrator | Providers are signed by their developers.
2026-01-03 00:02:39.013363 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-01-03 00:02:39.013370 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-01-03 00:02:39.013390 | orchestrator |
2026-01-03 00:02:39.013398 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-01-03 00:02:39.013405 | orchestrator | selections it made above. Include this file in your version control repository
2026-01-03 00:02:39.013433 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-01-03 00:02:39.013440 | orchestrator | you run "tofu init" in the future.
2026-01-03 00:02:39.013674 | orchestrator |
2026-01-03 00:02:39.013699 | orchestrator | OpenTofu has been successfully initialized!
2026-01-03 00:02:39.013706 | orchestrator |
2026-01-03 00:02:39.013713 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-01-03 00:02:39.013727 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-01-03 00:02:39.013734 | orchestrator | should now work.
2026-01-03 00:02:39.013753 | orchestrator |
2026-01-03 00:02:39.013759 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-01-03 00:02:39.013766 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-01-03 00:02:39.013773 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-01-03 00:02:39.169178 | orchestrator | Created and switched to workspace "ci"!
2026-01-03 00:02:39.169344 | orchestrator |
2026-01-03 00:02:39.169362 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-01-03 00:02:39.169377 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-01-03 00:02:39.169388 | orchestrator | for this configuration.
2026-01-03 00:02:39.315362 | orchestrator | ci.auto.tfvars
2026-01-03 00:02:39.703750 | orchestrator | default_custom.tf
2026-01-03 00:02:40.666172 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-01-03 00:02:41.210662 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-01-03 00:02:41.455790 | orchestrator |
2026-01-03 00:02:41.455899 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-01-03 00:02:41.455909 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-01-03 00:02:41.455932 | orchestrator | + create
2026-01-03 00:02:41.455938 | orchestrator | <= read (data resources)
2026-01-03 00:02:41.455943 | orchestrator |
2026-01-03 00:02:41.455948 | orchestrator | OpenTofu will perform the following actions:
2026-01-03 00:02:41.455952 | orchestrator |
2026-01-03 00:02:41.455956 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-01-03 00:02:41.455961 | orchestrator | # (config refers to values not yet known)
2026-01-03 00:02:41.455965 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-01-03 00:02:41.455969 | orchestrator | + checksum = (known after apply)
2026-01-03 00:02:41.455974 | orchestrator | + created_at = (known after apply)
2026-01-03 00:02:41.455978 | orchestrator | + file = (known after apply)
2026-01-03 00:02:41.455981 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.456006 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.456011 | orchestrator | + min_disk_gb = (known after apply)
2026-01-03 00:02:41.456015 | orchestrator | + min_ram_mb = (known after apply)
2026-01-03 00:02:41.456019 | orchestrator | + most_recent = true
2026-01-03 00:02:41.456023 | orchestrator | + name = (known after apply)
2026-01-03 00:02:41.456027 | orchestrator | + protected = (known after apply)
2026-01-03 00:02:41.456031 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.456039 | orchestrator | + schema = (known after apply)
2026-01-03 00:02:41.456043 | orchestrator | + size_bytes = (known after apply)
2026-01-03 00:02:41.456047 | orchestrator | + tags = (known after apply)
2026-01-03 00:02:41.456051 | orchestrator | + updated_at = (known after apply)
2026-01-03 00:02:41.456055 | orchestrator | }
2026-01-03 00:02:41.456059 | orchestrator |
2026-01-03 00:02:41.456063 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-01-03 00:02:41.456067 | orchestrator | # (config refers to values not yet known)
2026-01-03 00:02:41.456071 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-01-03 00:02:41.456075 | orchestrator | + checksum = (known after apply)
2026-01-03 00:02:41.456079 | orchestrator | + created_at = (known after apply)
2026-01-03 00:02:41.456083 | orchestrator | + file = (known after apply)
2026-01-03 00:02:41.456087 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.456090 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.456094 | orchestrator | + min_disk_gb = (known after apply)
2026-01-03 00:02:41.456098 | orchestrator | + min_ram_mb = (known after apply)
2026-01-03 00:02:41.456102 | orchestrator | + most_recent = true
2026-01-03 00:02:41.456105 | orchestrator | + name = (known after apply)
2026-01-03 00:02:41.456109 | orchestrator | + protected = (known after apply)
2026-01-03 00:02:41.456113 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.456116 | orchestrator | + schema = (known after apply)
2026-01-03 00:02:41.456120 | orchestrator | + size_bytes = (known after apply)
2026-01-03 00:02:41.456124 | orchestrator | + tags = (known after apply)
2026-01-03 00:02:41.456128 | orchestrator | + updated_at = (known after apply)
2026-01-03 00:02:41.456131 | orchestrator | }
2026-01-03 00:02:41.456137 | orchestrator |
2026-01-03 00:02:41.456141 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-01-03 00:02:41.456145 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-01-03 00:02:41.456149 | orchestrator | + content = (known after apply)
2026-01-03 00:02:41.456153 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-03 00:02:41.456157 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-03 00:02:41.456160 | orchestrator | + content_md5 = (known after apply)
2026-01-03 00:02:41.456164 | orchestrator | + content_sha1 = (known after apply)
2026-01-03 00:02:41.456168 | orchestrator | + content_sha256 = (known after apply)
2026-01-03 00:02:41.456171 | orchestrator | + content_sha512 = (known after apply)
2026-01-03 00:02:41.456175 | orchestrator | + directory_permission = "0777"
2026-01-03 00:02:41.456179 | orchestrator | + file_permission = "0644"
2026-01-03 00:02:41.456183 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-01-03 00:02:41.456187 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.456190 | orchestrator | }
2026-01-03 00:02:41.456194 | orchestrator |
2026-01-03 00:02:41.456198 | orchestrator | # local_file.id_rsa_pub will be created
2026-01-03 00:02:41.456202 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-01-03 00:02:41.456205 | orchestrator | + content = (known after apply)
2026-01-03 00:02:41.456209 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-03 00:02:41.456213 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-03 00:02:41.456216 | orchestrator | + content_md5 = (known after apply)
2026-01-03 00:02:41.456220 | orchestrator | + content_sha1 = (known after apply)
2026-01-03 00:02:41.456224 | orchestrator | + content_sha256 = (known after apply)
2026-01-03 00:02:41.456228 | orchestrator | + content_sha512 = (known after apply)
2026-01-03 00:02:41.456231 | orchestrator | + directory_permission = "0777"
2026-01-03 00:02:41.456235 | orchestrator | + file_permission = "0644"
2026-01-03 00:02:41.456243 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-01-03 00:02:41.456246 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.456250 | orchestrator | }
2026-01-03 00:02:41.456254 | orchestrator |
2026-01-03 00:02:41.456262 | orchestrator | # local_file.inventory will be created
2026-01-03 00:02:41.456266 | orchestrator | + resource "local_file" "inventory" {
2026-01-03 00:02:41.456270 | orchestrator | + content = (known after apply)
2026-01-03 00:02:41.456273 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-03 00:02:41.456277 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-03 00:02:41.456281 | orchestrator | + content_md5 = (known after apply)
2026-01-03 00:02:41.456284 | orchestrator | + content_sha1 = (known after apply)
2026-01-03 00:02:41.456288 | orchestrator | + content_sha256 = (known after apply)
2026-01-03 00:02:41.456292 | orchestrator | + content_sha512 = (known after apply)
2026-01-03 00:02:41.456296 | orchestrator | + directory_permission = "0777"
2026-01-03 00:02:41.456300 | orchestrator | + file_permission = "0644"
2026-01-03 00:02:41.456303 | orchestrator | + filename = "inventory.ci"
2026-01-03 00:02:41.456307 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.456311 | orchestrator | }
2026-01-03 00:02:41.456316 | orchestrator |
2026-01-03 00:02:41.456320 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-01-03 00:02:41.456324 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-01-03 00:02:41.456327 | orchestrator | + content = (sensitive value)
2026-01-03 00:02:41.456331 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-03 00:02:41.456335 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-03 00:02:41.456338 | orchestrator | + content_md5 = (known after apply)
2026-01-03 00:02:41.456342 | orchestrator | + content_sha1 = (known after apply)
2026-01-03 00:02:41.456346 | orchestrator | + content_sha256 = (known after apply)
2026-01-03 00:02:41.456349 | orchestrator | + content_sha512 = (known after apply)
2026-01-03 00:02:41.456353 | orchestrator | + directory_permission = "0700"
2026-01-03 00:02:41.456357 | orchestrator | + file_permission = "0600"
2026-01-03 00:02:41.456361 | orchestrator | + filename = ".id_rsa.ci"
2026-01-03 00:02:41.456364 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.456368 | orchestrator | }
2026-01-03 00:02:41.456372 | orchestrator |
2026-01-03 00:02:41.456375 | orchestrator | # null_resource.node_semaphore will be created
2026-01-03 00:02:41.456379 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-01-03 00:02:41.456383 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.456387 | orchestrator | }
2026-01-03 00:02:41.456391 | orchestrator |
2026-01-03 00:02:41.456394 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-01-03 00:02:41.456398 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-01-03 00:02:41.456402 | orchestrator | + attachment = (known after apply)
2026-01-03 00:02:41.456406 | orchestrator | + availability_zone = "nova"
2026-01-03 00:02:41.456409 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.456413 | orchestrator | + image_id = (known after apply)
2026-01-03 00:02:41.456417 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.456420 | orchestrator | + name = "testbed-volume-manager-base"
2026-01-03 00:02:41.456424 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.456428 | orchestrator | + size = 80
2026-01-03 00:02:41.456431 | orchestrator | + volume_retype_policy = "never"
2026-01-03 00:02:41.456435 | orchestrator | + volume_type = "ssd"
2026-01-03 00:02:41.456439 | orchestrator | }
2026-01-03 00:02:41.456443 | orchestrator |
2026-01-03 00:02:41.456446 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-01-03 00:02:41.456450 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-03 00:02:41.456454 | orchestrator | + attachment = (known after apply)
2026-01-03 00:02:41.456457 | orchestrator | + availability_zone = "nova"
2026-01-03 00:02:41.456461 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.456470 | orchestrator | + image_id = (known after apply)
2026-01-03 00:02:41.456474 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.456477 | orchestrator | + name = "testbed-volume-0-node-base"
2026-01-03 00:02:41.456481 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.456485 | orchestrator | + size = 80
2026-01-03 00:02:41.456489 | orchestrator | + volume_retype_policy = "never"
2026-01-03 00:02:41.456492 | orchestrator | + volume_type = "ssd"
2026-01-03 00:02:41.456496 | orchestrator | }
2026-01-03 00:02:41.456501 | orchestrator |
2026-01-03 00:02:41.456505 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-01-03 00:02:41.456509 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-03 00:02:41.456513 | orchestrator | + attachment = (known after apply)
2026-01-03 00:02:41.456516 | orchestrator | + availability_zone = "nova"
2026-01-03 00:02:41.456520 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.456524 | orchestrator | + image_id = (known after apply)
2026-01-03 00:02:41.456528 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.456531 | orchestrator | + name = "testbed-volume-1-node-base"
2026-01-03 00:02:41.456535 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.456539 | orchestrator | + size = 80
2026-01-03 00:02:41.456542 | orchestrator | + volume_retype_policy = "never"
2026-01-03 00:02:41.456546 | orchestrator | + volume_type = "ssd"
2026-01-03 00:02:41.456550 | orchestrator | }
2026-01-03 00:02:41.456553 | orchestrator |
2026-01-03 00:02:41.456557 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-01-03 00:02:41.456561 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-03 00:02:41.456565 | orchestrator | + attachment = (known after apply)
2026-01-03 00:02:41.456568 | orchestrator | + availability_zone = "nova"
2026-01-03 00:02:41.456572 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.456576 | orchestrator | + image_id = (known after apply)
2026-01-03 00:02:41.456579 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.456583 | orchestrator | + name = "testbed-volume-2-node-base"
2026-01-03 00:02:41.456587 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.456591 | orchestrator | + size = 80
2026-01-03 00:02:41.456594 | orchestrator | + volume_retype_policy = "never"
2026-01-03 00:02:41.456598 | orchestrator | + volume_type = "ssd"
2026-01-03 00:02:41.456602 | orchestrator | }
2026-01-03 00:02:41.456605 | orchestrator |
2026-01-03 00:02:41.456609 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-01-03 00:02:41.456613 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-03 00:02:41.456616 | orchestrator | + attachment = (known after apply)
2026-01-03 00:02:41.456620 | orchestrator | + availability_zone = "nova"
2026-01-03 00:02:41.456624 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.456628 | orchestrator | + image_id = (known after apply)
2026-01-03 00:02:41.456631 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.456637 | orchestrator | + name = "testbed-volume-3-node-base"
2026-01-03 00:02:41.456641 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.456645 | orchestrator | + size = 80
2026-01-03 00:02:41.456649 | orchestrator | + volume_retype_policy = "never"
2026-01-03 00:02:41.456652 | orchestrator | + volume_type = "ssd"
2026-01-03 00:02:41.456656 | orchestrator | }
2026-01-03 00:02:41.456660 | orchestrator |
2026-01-03 00:02:41.456664 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-01-03 00:02:41.456667 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-03 00:02:41.456671 | orchestrator | + attachment = (known after apply)
2026-01-03 00:02:41.456675 | orchestrator | + availability_zone = "nova"
2026-01-03 00:02:41.456678 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.456685 | orchestrator | + image_id = (known after apply)
2026-01-03 00:02:41.456689 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.456693 | orchestrator | + name = "testbed-volume-4-node-base"
2026-01-03 00:02:41.456696 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.456700 | orchestrator | + size = 80
2026-01-03 00:02:41.456704 | orchestrator | + volume_retype_policy = "never"
2026-01-03 00:02:41.456707 | orchestrator | + volume_type = "ssd"
2026-01-03 00:02:41.456711 | orchestrator | }
2026-01-03 00:02:41.456717 | orchestrator |
2026-01-03 00:02:41.456720 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-01-03 00:02:41.456724 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-03 00:02:41.456728 | orchestrator | + attachment = (known after apply)
2026-01-03 00:02:41.456731 | orchestrator | + availability_zone = "nova"
2026-01-03 00:02:41.456735 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.456739 | orchestrator | + image_id = (known after apply)
2026-01-03 00:02:41.456742 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.456746 | orchestrator | + name = "testbed-volume-5-node-base"
2026-01-03 00:02:41.456750 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.456754 | orchestrator | + size = 80
2026-01-03 00:02:41.456757 | orchestrator | + volume_retype_policy = "never"
2026-01-03 00:02:41.456761 | orchestrator | + volume_type = "ssd"
2026-01-03 00:02:41.456765 | orchestrator | }
2026-01-03 00:02:41.456768 | orchestrator |
2026-01-03 00:02:41.456772 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-01-03 00:02:41.456776 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-03 00:02:41.456780 | orchestrator | + attachment = (known after apply)
2026-01-03 00:02:41.456783 | orchestrator | + availability_zone = "nova"
2026-01-03 00:02:41.456787 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.456791 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.456794 | orchestrator | + name = "testbed-volume-0-node-3"
2026-01-03 00:02:41.456798 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.456802 | orchestrator | + size = 20
2026-01-03 00:02:41.456805 | orchestrator | + volume_retype_policy = "never"
2026-01-03 00:02:41.456809 | orchestrator | + volume_type = "ssd"
2026-01-03 00:02:41.456813 | orchestrator | }
2026-01-03 00:02:41.456817 | orchestrator |
2026-01-03 00:02:41.456820 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-01-03 00:02:41.456824 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-03 00:02:41.456828 | orchestrator | + attachment = (known after apply)
2026-01-03 00:02:41.456832 | orchestrator | + availability_zone = "nova"
2026-01-03 00:02:41.456835 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.456839 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.456843 | orchestrator | + name = "testbed-volume-1-node-4"
2026-01-03 00:02:41.456846 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.456850 | orchestrator | + size = 20
2026-01-03 00:02:41.456854 | orchestrator | + volume_retype_policy = "never"
2026-01-03 00:02:41.456857 | orchestrator | + volume_type = "ssd"
2026-01-03 00:02:41.456861 | orchestrator | }
2026-01-03 00:02:41.456865 | orchestrator |
2026-01-03 00:02:41.456869 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-01-03 00:02:41.456901 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-03 00:02:41.456905 | orchestrator | + attachment = (known after apply)
2026-01-03 00:02:41.456909 | orchestrator | + availability_zone = "nova"
2026-01-03 00:02:41.456913 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.456936 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.456940 | orchestrator | + name = "testbed-volume-2-node-5"
2026-01-03 00:02:41.456944 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.456958 | orchestrator | + size = 20
2026-01-03 00:02:41.456962 | orchestrator | + volume_retype_policy = "never"
2026-01-03 00:02:41.456966 | orchestrator | + volume_type = "ssd"
2026-01-03 00:02:41.456970 | orchestrator | }
2026-01-03 00:02:41.456973 | orchestrator |
2026-01-03 00:02:41.456977 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-01-03 00:02:41.456981 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-03 00:02:41.456985 | orchestrator | + attachment = (known after apply)
2026-01-03 00:02:41.456988 | orchestrator | + availability_zone = "nova"
2026-01-03 00:02:41.456992 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.456996 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.456999 | orchestrator | + name = "testbed-volume-3-node-3"
2026-01-03 00:02:41.457003 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.457007 | orchestrator | + size = 20
2026-01-03 00:02:41.457011 | orchestrator | + volume_retype_policy = "never"
2026-01-03 00:02:41.457014 | orchestrator | + volume_type = "ssd"
2026-01-03 00:02:41.457018 | orchestrator | }
2026-01-03 00:02:41.457024 | orchestrator |
2026-01-03 00:02:41.457028 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-01-03 00:02:41.457032 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-03 00:02:41.457035 | orchestrator | + attachment = (known after apply)
2026-01-03 00:02:41.457039 | orchestrator | + availability_zone = "nova"
2026-01-03 00:02:41.457043 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.457047 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.457050 | orchestrator | + name = "testbed-volume-4-node-4"
2026-01-03 00:02:41.457054 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.457061 | orchestrator | + size = 20
2026-01-03 00:02:41.457065 | orchestrator | + volume_retype_policy = "never"
2026-01-03 00:02:41.457068 | orchestrator | + volume_type = "ssd"
2026-01-03 00:02:41.457072 | orchestrator | }
2026-01-03 00:02:41.457076 | orchestrator |
2026-01-03 00:02:41.457080 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-01-03 00:02:41.457083 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-03 00:02:41.457087 | orchestrator | + attachment = (known after apply)
2026-01-03 00:02:41.457091 | orchestrator | + availability_zone = "nova"
2026-01-03 00:02:41.457094 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.457098 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.457102 | orchestrator | + name = "testbed-volume-5-node-5"
2026-01-03 00:02:41.457105 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.457109 | orchestrator | + size = 20
2026-01-03 00:02:41.457113 | orchestrator | + volume_retype_policy = "never"
2026-01-03 00:02:41.457117 | orchestrator | + volume_type = "ssd"
2026-01-03 00:02:41.457120 | orchestrator | }
2026-01-03 00:02:41.457124 | orchestrator |
2026-01-03 00:02:41.457128 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-01-03 00:02:41.457131 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-03 00:02:41.457135 | orchestrator | + attachment = (known after apply)
2026-01-03 00:02:41.457139 | orchestrator | + availability_zone = "nova"
2026-01-03 00:02:41.457142 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.457146 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.457150 | orchestrator | + name = "testbed-volume-6-node-3"
2026-01-03 00:02:41.457154 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.457157 | orchestrator | + size = 20
2026-01-03 00:02:41.457161 | orchestrator | + volume_retype_policy = "never"
2026-01-03 00:02:41.457164 | orchestrator | + volume_type = "ssd"
2026-01-03 00:02:41.457168 | orchestrator | }
2026-01-03 00:02:41.457172 | orchestrator |
2026-01-03 00:02:41.457176 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-01-03 00:02:41.457180 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-03 00:02:41.457187 | orchestrator | + attachment = (known after apply)
2026-01-03 00:02:41.457191 | orchestrator | + availability_zone = "nova"
2026-01-03 00:02:41.457195 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.457198 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.457202 | orchestrator | + name = "testbed-volume-7-node-4"
2026-01-03 00:02:41.457206 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.457209 | orchestrator | + size = 20
2026-01-03 00:02:41.457213 | orchestrator | + volume_retype_policy = "never"
2026-01-03 00:02:41.457217 | orchestrator | + volume_type = "ssd"
2026-01-03 00:02:41.457221 | orchestrator | }
2026-01-03 00:02:41.457224 | orchestrator |
2026-01-03 00:02:41.457228 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-01-03 00:02:41.457232 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-01-03 00:02:41.457236 | orchestrator | + attachment = (known after apply) 2026-01-03 00:02:41.457239 | orchestrator | + availability_zone = "nova" 2026-01-03 00:02:41.457243 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.457247 | orchestrator | + metadata = (known after apply) 2026-01-03 00:02:41.457250 | orchestrator | + name = "testbed-volume-8-node-5" 2026-01-03 00:02:41.457254 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.457258 | orchestrator | + size = 20 2026-01-03 00:02:41.457261 | orchestrator | + volume_retype_policy = "never" 2026-01-03 00:02:41.457265 | orchestrator | + volume_type = "ssd" 2026-01-03 00:02:41.457269 | orchestrator | } 2026-01-03 00:02:41.457274 | orchestrator | 2026-01-03 00:02:41.457278 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-01-03 00:02:41.457282 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-01-03 00:02:41.457285 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-03 00:02:41.457289 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-03 00:02:41.457293 | orchestrator | + all_metadata = (known after apply) 2026-01-03 00:02:41.457297 | orchestrator | + all_tags = (known after apply) 2026-01-03 00:02:41.457300 | orchestrator | + availability_zone = "nova" 2026-01-03 00:02:41.457304 | orchestrator | + config_drive = true 2026-01-03 00:02:41.457308 | orchestrator | + created = (known after apply) 2026-01-03 00:02:41.457311 | orchestrator | + flavor_id = (known after apply) 2026-01-03 00:02:41.457315 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-01-03 00:02:41.457319 | orchestrator | + force_delete = false 2026-01-03 00:02:41.457322 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-03 00:02:41.457326 | 
orchestrator | + id = (known after apply) 2026-01-03 00:02:41.457330 | orchestrator | + image_id = (known after apply) 2026-01-03 00:02:41.457333 | orchestrator | + image_name = (known after apply) 2026-01-03 00:02:41.457337 | orchestrator | + key_pair = "testbed" 2026-01-03 00:02:41.457341 | orchestrator | + name = "testbed-manager" 2026-01-03 00:02:41.457344 | orchestrator | + power_state = "active" 2026-01-03 00:02:41.457348 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.457352 | orchestrator | + security_groups = (known after apply) 2026-01-03 00:02:41.457355 | orchestrator | + stop_before_destroy = false 2026-01-03 00:02:41.457359 | orchestrator | + updated = (known after apply) 2026-01-03 00:02:41.457363 | orchestrator | + user_data = (sensitive value) 2026-01-03 00:02:41.457366 | orchestrator | 2026-01-03 00:02:41.457371 | orchestrator | + block_device { 2026-01-03 00:02:41.457374 | orchestrator | + boot_index = 0 2026-01-03 00:02:41.457378 | orchestrator | + delete_on_termination = false 2026-01-03 00:02:41.457385 | orchestrator | + destination_type = "volume" 2026-01-03 00:02:41.457388 | orchestrator | + multiattach = false 2026-01-03 00:02:41.457392 | orchestrator | + source_type = "volume" 2026-01-03 00:02:41.457396 | orchestrator | + uuid = (known after apply) 2026-01-03 00:02:41.457403 | orchestrator | } 2026-01-03 00:02:41.457407 | orchestrator | 2026-01-03 00:02:41.457410 | orchestrator | + network { 2026-01-03 00:02:41.457414 | orchestrator | + access_network = false 2026-01-03 00:02:41.457418 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-03 00:02:41.457422 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-03 00:02:41.457425 | orchestrator | + mac = (known after apply) 2026-01-03 00:02:41.457429 | orchestrator | + name = (known after apply) 2026-01-03 00:02:41.457433 | orchestrator | + port = (known after apply) 2026-01-03 00:02:41.457437 | orchestrator | + uuid = (known after apply) 2026-01-03 
00:02:41.457440 | orchestrator | } 2026-01-03 00:02:41.457444 | orchestrator | } 2026-01-03 00:02:41.457449 | orchestrator | 2026-01-03 00:02:41.457453 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-01-03 00:02:41.457457 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-03 00:02:41.457461 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-03 00:02:41.457464 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-03 00:02:41.457468 | orchestrator | + all_metadata = (known after apply) 2026-01-03 00:02:41.457472 | orchestrator | + all_tags = (known after apply) 2026-01-03 00:02:41.457476 | orchestrator | + availability_zone = "nova" 2026-01-03 00:02:41.457479 | orchestrator | + config_drive = true 2026-01-03 00:02:41.457483 | orchestrator | + created = (known after apply) 2026-01-03 00:02:41.457487 | orchestrator | + flavor_id = (known after apply) 2026-01-03 00:02:41.457490 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-03 00:02:41.457494 | orchestrator | + force_delete = false 2026-01-03 00:02:41.457498 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-03 00:02:41.457501 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.457505 | orchestrator | + image_id = (known after apply) 2026-01-03 00:02:41.457509 | orchestrator | + image_name = (known after apply) 2026-01-03 00:02:41.457512 | orchestrator | + key_pair = "testbed" 2026-01-03 00:02:41.457516 | orchestrator | + name = "testbed-node-0" 2026-01-03 00:02:41.457520 | orchestrator | + power_state = "active" 2026-01-03 00:02:41.457523 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.457527 | orchestrator | + security_groups = (known after apply) 2026-01-03 00:02:41.457531 | orchestrator | + stop_before_destroy = false 2026-01-03 00:02:41.457535 | orchestrator | + updated = (known after apply) 2026-01-03 00:02:41.457538 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-03 00:02:41.457542 | orchestrator | 2026-01-03 00:02:41.457546 | orchestrator | + block_device { 2026-01-03 00:02:41.457549 | orchestrator | + boot_index = 0 2026-01-03 00:02:41.457553 | orchestrator | + delete_on_termination = false 2026-01-03 00:02:41.457557 | orchestrator | + destination_type = "volume" 2026-01-03 00:02:41.457561 | orchestrator | + multiattach = false 2026-01-03 00:02:41.457564 | orchestrator | + source_type = "volume" 2026-01-03 00:02:41.457568 | orchestrator | + uuid = (known after apply) 2026-01-03 00:02:41.457572 | orchestrator | } 2026-01-03 00:02:41.457575 | orchestrator | 2026-01-03 00:02:41.457579 | orchestrator | + network { 2026-01-03 00:02:41.457583 | orchestrator | + access_network = false 2026-01-03 00:02:41.457587 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-03 00:02:41.457590 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-03 00:02:41.457594 | orchestrator | + mac = (known after apply) 2026-01-03 00:02:41.457598 | orchestrator | + name = (known after apply) 2026-01-03 00:02:41.457601 | orchestrator | + port = (known after apply) 2026-01-03 00:02:41.457605 | orchestrator | + uuid = (known after apply) 2026-01-03 00:02:41.457609 | orchestrator | } 2026-01-03 00:02:41.457613 | orchestrator | } 2026-01-03 00:02:41.457618 | orchestrator | 2026-01-03 00:02:41.457622 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-01-03 00:02:41.457625 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-03 00:02:41.457629 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-03 00:02:41.457636 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-03 00:02:41.457640 | orchestrator | + all_metadata = (known after apply) 2026-01-03 00:02:41.457644 | orchestrator | + all_tags = (known after apply) 2026-01-03 00:02:41.457647 | orchestrator | + availability_zone = "nova" 2026-01-03 00:02:41.457651 
| orchestrator | + config_drive = true 2026-01-03 00:02:41.457655 | orchestrator | + created = (known after apply) 2026-01-03 00:02:41.457658 | orchestrator | + flavor_id = (known after apply) 2026-01-03 00:02:41.457662 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-03 00:02:41.457666 | orchestrator | + force_delete = false 2026-01-03 00:02:41.457669 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-03 00:02:41.457673 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.457677 | orchestrator | + image_id = (known after apply) 2026-01-03 00:02:41.457681 | orchestrator | + image_name = (known after apply) 2026-01-03 00:02:41.457684 | orchestrator | + key_pair = "testbed" 2026-01-03 00:02:41.457688 | orchestrator | + name = "testbed-node-1" 2026-01-03 00:02:41.457692 | orchestrator | + power_state = "active" 2026-01-03 00:02:41.457695 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.457699 | orchestrator | + security_groups = (known after apply) 2026-01-03 00:02:41.457703 | orchestrator | + stop_before_destroy = false 2026-01-03 00:02:41.457706 | orchestrator | + updated = (known after apply) 2026-01-03 00:02:41.457710 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-03 00:02:41.457714 | orchestrator | 2026-01-03 00:02:41.457717 | orchestrator | + block_device { 2026-01-03 00:02:41.457721 | orchestrator | + boot_index = 0 2026-01-03 00:02:41.457725 | orchestrator | + delete_on_termination = false 2026-01-03 00:02:41.457728 | orchestrator | + destination_type = "volume" 2026-01-03 00:02:41.457732 | orchestrator | + multiattach = false 2026-01-03 00:02:41.457736 | orchestrator | + source_type = "volume" 2026-01-03 00:02:41.457740 | orchestrator | + uuid = (known after apply) 2026-01-03 00:02:41.457743 | orchestrator | } 2026-01-03 00:02:41.457747 | orchestrator | 2026-01-03 00:02:41.457751 | orchestrator | + network { 2026-01-03 00:02:41.457754 | orchestrator | + access_network = 
false 2026-01-03 00:02:41.457758 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-03 00:02:41.457762 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-03 00:02:41.457765 | orchestrator | + mac = (known after apply) 2026-01-03 00:02:41.457769 | orchestrator | + name = (known after apply) 2026-01-03 00:02:41.457773 | orchestrator | + port = (known after apply) 2026-01-03 00:02:41.457777 | orchestrator | + uuid = (known after apply) 2026-01-03 00:02:41.457780 | orchestrator | } 2026-01-03 00:02:41.457784 | orchestrator | } 2026-01-03 00:02:41.457790 | orchestrator | 2026-01-03 00:02:41.457794 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-01-03 00:02:41.457797 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-03 00:02:41.457801 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-03 00:02:41.457805 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-03 00:02:41.457809 | orchestrator | + all_metadata = (known after apply) 2026-01-03 00:02:41.457812 | orchestrator | + all_tags = (known after apply) 2026-01-03 00:02:41.457819 | orchestrator | + availability_zone = "nova" 2026-01-03 00:02:41.457822 | orchestrator | + config_drive = true 2026-01-03 00:02:41.457826 | orchestrator | + created = (known after apply) 2026-01-03 00:02:41.457830 | orchestrator | + flavor_id = (known after apply) 2026-01-03 00:02:41.457834 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-03 00:02:41.457837 | orchestrator | + force_delete = false 2026-01-03 00:02:41.457841 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-03 00:02:41.457845 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.457848 | orchestrator | + image_id = (known after apply) 2026-01-03 00:02:41.457855 | orchestrator | + image_name = (known after apply) 2026-01-03 00:02:41.457859 | orchestrator | + key_pair = "testbed" 2026-01-03 00:02:41.457863 | orchestrator | + name = 
"testbed-node-2" 2026-01-03 00:02:41.457866 | orchestrator | + power_state = "active" 2026-01-03 00:02:41.457880 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.457884 | orchestrator | + security_groups = (known after apply) 2026-01-03 00:02:41.457887 | orchestrator | + stop_before_destroy = false 2026-01-03 00:02:41.457891 | orchestrator | + updated = (known after apply) 2026-01-03 00:02:41.457895 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-03 00:02:41.457899 | orchestrator | 2026-01-03 00:02:41.457902 | orchestrator | + block_device { 2026-01-03 00:02:41.457906 | orchestrator | + boot_index = 0 2026-01-03 00:02:41.457910 | orchestrator | + delete_on_termination = false 2026-01-03 00:02:41.457913 | orchestrator | + destination_type = "volume" 2026-01-03 00:02:41.457917 | orchestrator | + multiattach = false 2026-01-03 00:02:41.457921 | orchestrator | + source_type = "volume" 2026-01-03 00:02:41.457924 | orchestrator | + uuid = (known after apply) 2026-01-03 00:02:41.457928 | orchestrator | } 2026-01-03 00:02:41.457932 | orchestrator | 2026-01-03 00:02:41.457936 | orchestrator | + network { 2026-01-03 00:02:41.457939 | orchestrator | + access_network = false 2026-01-03 00:02:41.457943 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-03 00:02:41.457947 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-03 00:02:41.457950 | orchestrator | + mac = (known after apply) 2026-01-03 00:02:41.457954 | orchestrator | + name = (known after apply) 2026-01-03 00:02:41.457958 | orchestrator | + port = (known after apply) 2026-01-03 00:02:41.457961 | orchestrator | + uuid = (known after apply) 2026-01-03 00:02:41.457965 | orchestrator | } 2026-01-03 00:02:41.457969 | orchestrator | } 2026-01-03 00:02:41.457972 | orchestrator | 2026-01-03 00:02:41.457976 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-01-03 00:02:41.457980 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-01-03 00:02:41.457983 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-03 00:02:41.457987 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-03 00:02:41.457991 | orchestrator | + all_metadata = (known after apply) 2026-01-03 00:02:41.457995 | orchestrator | + all_tags = (known after apply) 2026-01-03 00:02:41.457998 | orchestrator | + availability_zone = "nova" 2026-01-03 00:02:41.458002 | orchestrator | + config_drive = true 2026-01-03 00:02:41.458006 | orchestrator | + created = (known after apply) 2026-01-03 00:02:41.458009 | orchestrator | + flavor_id = (known after apply) 2026-01-03 00:02:41.458038 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-03 00:02:41.458043 | orchestrator | + force_delete = false 2026-01-03 00:02:41.458047 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-03 00:02:41.458050 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.458054 | orchestrator | + image_id = (known after apply) 2026-01-03 00:02:41.458059 | orchestrator | + image_name = (known after apply) 2026-01-03 00:02:41.458062 | orchestrator | + key_pair = "testbed" 2026-01-03 00:02:41.458066 | orchestrator | + name = "testbed-node-3" 2026-01-03 00:02:41.458070 | orchestrator | + power_state = "active" 2026-01-03 00:02:41.458073 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.458077 | orchestrator | + security_groups = (known after apply) 2026-01-03 00:02:41.458081 | orchestrator | + stop_before_destroy = false 2026-01-03 00:02:41.458085 | orchestrator | + updated = (known after apply) 2026-01-03 00:02:41.458088 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-03 00:02:41.458092 | orchestrator | 2026-01-03 00:02:41.458096 | orchestrator | + block_device { 2026-01-03 00:02:41.458103 | orchestrator | + boot_index = 0 2026-01-03 00:02:41.458107 | orchestrator | + delete_on_termination = false 2026-01-03 
00:02:41.458111 | orchestrator | + destination_type = "volume" 2026-01-03 00:02:41.458118 | orchestrator | + multiattach = false 2026-01-03 00:02:41.458121 | orchestrator | + source_type = "volume" 2026-01-03 00:02:41.458133 | orchestrator | + uuid = (known after apply) 2026-01-03 00:02:41.458137 | orchestrator | } 2026-01-03 00:02:41.458141 | orchestrator | 2026-01-03 00:02:41.458144 | orchestrator | + network { 2026-01-03 00:02:41.458148 | orchestrator | + access_network = false 2026-01-03 00:02:41.458152 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-03 00:02:41.458155 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-03 00:02:41.458159 | orchestrator | + mac = (known after apply) 2026-01-03 00:02:41.458163 | orchestrator | + name = (known after apply) 2026-01-03 00:02:41.458167 | orchestrator | + port = (known after apply) 2026-01-03 00:02:41.458170 | orchestrator | + uuid = (known after apply) 2026-01-03 00:02:41.458174 | orchestrator | } 2026-01-03 00:02:41.458178 | orchestrator | } 2026-01-03 00:02:41.458184 | orchestrator | 2026-01-03 00:02:41.458188 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-01-03 00:02:41.458191 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-03 00:02:41.458195 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-03 00:02:41.458199 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-03 00:02:41.458203 | orchestrator | + all_metadata = (known after apply) 2026-01-03 00:02:41.458207 | orchestrator | + all_tags = (known after apply) 2026-01-03 00:02:41.458210 | orchestrator | + availability_zone = "nova" 2026-01-03 00:02:41.458214 | orchestrator | + config_drive = true 2026-01-03 00:02:41.458218 | orchestrator | + created = (known after apply) 2026-01-03 00:02:41.458222 | orchestrator | + flavor_id = (known after apply) 2026-01-03 00:02:41.458225 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-03 00:02:41.458229 | 
orchestrator | + force_delete = false 2026-01-03 00:02:41.458233 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-03 00:02:41.458237 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.458240 | orchestrator | + image_id = (known after apply) 2026-01-03 00:02:41.458244 | orchestrator | + image_name = (known after apply) 2026-01-03 00:02:41.458248 | orchestrator | + key_pair = "testbed" 2026-01-03 00:02:41.458251 | orchestrator | + name = "testbed-node-4" 2026-01-03 00:02:41.458255 | orchestrator | + power_state = "active" 2026-01-03 00:02:41.458259 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.458263 | orchestrator | + security_groups = (known after apply) 2026-01-03 00:02:41.458266 | orchestrator | + stop_before_destroy = false 2026-01-03 00:02:41.458270 | orchestrator | + updated = (known after apply) 2026-01-03 00:02:41.458274 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-03 00:02:41.458278 | orchestrator | 2026-01-03 00:02:41.458282 | orchestrator | + block_device { 2026-01-03 00:02:41.458285 | orchestrator | + boot_index = 0 2026-01-03 00:02:41.458289 | orchestrator | + delete_on_termination = false 2026-01-03 00:02:41.458293 | orchestrator | + destination_type = "volume" 2026-01-03 00:02:41.458297 | orchestrator | + multiattach = false 2026-01-03 00:02:41.458300 | orchestrator | + source_type = "volume" 2026-01-03 00:02:41.458304 | orchestrator | + uuid = (known after apply) 2026-01-03 00:02:41.458308 | orchestrator | } 2026-01-03 00:02:41.458312 | orchestrator | 2026-01-03 00:02:41.458316 | orchestrator | + network { 2026-01-03 00:02:41.458319 | orchestrator | + access_network = false 2026-01-03 00:02:41.458323 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-03 00:02:41.458327 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-03 00:02:41.458331 | orchestrator | + mac = (known after apply) 2026-01-03 00:02:41.458334 | orchestrator | + name = (known 
after apply) 2026-01-03 00:02:41.458338 | orchestrator | + port = (known after apply) 2026-01-03 00:02:41.458342 | orchestrator | + uuid = (known after apply) 2026-01-03 00:02:41.458346 | orchestrator | } 2026-01-03 00:02:41.458349 | orchestrator | } 2026-01-03 00:02:41.458359 | orchestrator | 2026-01-03 00:02:41.458363 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-01-03 00:02:41.458367 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-03 00:02:41.458371 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-03 00:02:41.458374 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-03 00:02:41.458378 | orchestrator | + all_metadata = (known after apply) 2026-01-03 00:02:41.458382 | orchestrator | + all_tags = (known after apply) 2026-01-03 00:02:41.458385 | orchestrator | + availability_zone = "nova" 2026-01-03 00:02:41.458389 | orchestrator | + config_drive = true 2026-01-03 00:02:41.458393 | orchestrator | + created = (known after apply) 2026-01-03 00:02:41.458397 | orchestrator | + flavor_id = (known after apply) 2026-01-03 00:02:41.458400 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-03 00:02:41.458404 | orchestrator | + force_delete = false 2026-01-03 00:02:41.458411 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-03 00:02:41.458415 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.458419 | orchestrator | + image_id = (known after apply) 2026-01-03 00:02:41.458422 | orchestrator | + image_name = (known after apply) 2026-01-03 00:02:41.458426 | orchestrator | + key_pair = "testbed" 2026-01-03 00:02:41.458430 | orchestrator | + name = "testbed-node-5" 2026-01-03 00:02:41.458434 | orchestrator | + power_state = "active" 2026-01-03 00:02:41.458437 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.458441 | orchestrator | + security_groups = (known after apply) 2026-01-03 00:02:41.458445 | orchestrator | + 
stop_before_destroy = false 2026-01-03 00:02:41.458448 | orchestrator | + updated = (known after apply) 2026-01-03 00:02:41.458452 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-03 00:02:41.458456 | orchestrator | 2026-01-03 00:02:41.458460 | orchestrator | + block_device { 2026-01-03 00:02:41.458463 | orchestrator | + boot_index = 0 2026-01-03 00:02:41.458467 | orchestrator | + delete_on_termination = false 2026-01-03 00:02:41.458471 | orchestrator | + destination_type = "volume" 2026-01-03 00:02:41.458475 | orchestrator | + multiattach = false 2026-01-03 00:02:41.458478 | orchestrator | + source_type = "volume" 2026-01-03 00:02:41.458482 | orchestrator | + uuid = (known after apply) 2026-01-03 00:02:41.458486 | orchestrator | } 2026-01-03 00:02:41.458490 | orchestrator | 2026-01-03 00:02:41.458493 | orchestrator | + network { 2026-01-03 00:02:41.458497 | orchestrator | + access_network = false 2026-01-03 00:02:41.458501 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-03 00:02:41.458505 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-03 00:02:41.458508 | orchestrator | + mac = (known after apply) 2026-01-03 00:02:41.458512 | orchestrator | + name = (known after apply) 2026-01-03 00:02:41.458516 | orchestrator | + port = (known after apply) 2026-01-03 00:02:41.458520 | orchestrator | + uuid = (known after apply) 2026-01-03 00:02:41.458523 | orchestrator | } 2026-01-03 00:02:41.458527 | orchestrator | } 2026-01-03 00:02:41.458531 | orchestrator | 2026-01-03 00:02:41.458535 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-01-03 00:02:41.458539 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-01-03 00:02:41.458542 | orchestrator | + fingerprint = (known after apply) 2026-01-03 00:02:41.458546 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.458550 | orchestrator | + name = "testbed" 2026-01-03 00:02:41.458554 | orchestrator | + private_key = 
(sensitive value) 2026-01-03 00:02:41.458557 | orchestrator | + public_key = (known after apply) 2026-01-03 00:02:41.458561 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.458565 | orchestrator | + user_id = (known after apply) 2026-01-03 00:02:41.458569 | orchestrator | } 2026-01-03 00:02:41.458572 | orchestrator | 2026-01-03 00:02:41.458576 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-01-03 00:02:41.458580 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-03 00:02:41.458588 | orchestrator | + device = (known after apply) 2026-01-03 00:02:41.458592 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.458596 | orchestrator | + instance_id = (known after apply) 2026-01-03 00:02:41.458600 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.458603 | orchestrator | + volume_id = (known after apply) 2026-01-03 00:02:41.458607 | orchestrator | } 2026-01-03 00:02:41.458611 | orchestrator | 2026-01-03 00:02:41.458615 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-01-03 00:02:41.458618 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-03 00:02:41.458622 | orchestrator | + device = (known after apply) 2026-01-03 00:02:41.458626 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.458630 | orchestrator | + instance_id = (known after apply) 2026-01-03 00:02:41.458633 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.458637 | orchestrator | + volume_id = (known after apply) 2026-01-03 00:02:41.458641 | orchestrator | } 2026-01-03 00:02:41.458645 | orchestrator | 2026-01-03 00:02:41.458649 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-01-03 00:02:41.458652 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }
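The rules above follow the usual Terraform pattern for Neutron security groups: one `openstack_networking_secgroup_v2` resource per group, plus one `openstack_networking_secgroup_rule_v2` resource per rule referencing the group's `id`. The testbed's actual `.tf` source is not part of this log; the following is a minimal hypothetical sketch, using only attribute values visible in the plan, of how the SSH rule could be declared:

```hcl
# Hypothetical sketch — the real testbed Terraform files are not shown in this log.
resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"
  description = "management security group"
}

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  # Reference keeps the rule's security_group_id "(known after apply)" until the group exists.
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}
```

The `security_group_id` reference is why the plan shows that field as `(known after apply)`: the group's ID only exists once the group itself has been created.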
00:02:41.464098 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2026-01-03 00:02:41.464102 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2026-01-03 00:02:41.464106 | orchestrator | + description = "vrrp" 2026-01-03 00:02:41.464110 | orchestrator | + direction = "ingress" 2026-01-03 00:02:41.464113 | orchestrator | + ethertype = "IPv4" 2026-01-03 00:02:41.464117 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.464121 | orchestrator | + protocol = "112" 2026-01-03 00:02:41.464125 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.464128 | orchestrator | + remote_address_group_id = (known after apply) 2026-01-03 00:02:41.464132 | orchestrator | + remote_group_id = (known after apply) 2026-01-03 00:02:41.464136 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-01-03 00:02:41.464139 | orchestrator | + security_group_id = (known after apply) 2026-01-03 00:02:41.464143 | orchestrator | + tenant_id = (known after apply) 2026-01-03 00:02:41.464147 | orchestrator | } 2026-01-03 00:02:41.464151 | orchestrator | 2026-01-03 00:02:41.464155 | orchestrator | # openstack_networking_secgroup_v2.security_group_management will be created 2026-01-03 00:02:41.464159 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_management" { 2026-01-03 00:02:41.464163 | orchestrator | + all_tags = (known after apply) 2026-01-03 00:02:41.464167 | orchestrator | + description = "management security group" 2026-01-03 00:02:41.464171 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.464174 | orchestrator | + name = "testbed-management" 2026-01-03 00:02:41.464178 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.464182 | orchestrator | + stateful = (known after apply) 2026-01-03 00:02:41.464186 | orchestrator | + tenant_id = (known after apply) 2026-01-03 00:02:41.464189 | orchestrator | } 2026-01-03 
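The `security_group_rule_vrrp` entry planned above allows IP protocol 112 (VRRP, used by keepalived-style virtual-IP failover between the nodes) from any IPv4 source. A minimal sketch of how such a rule is declared for the OpenStack Terraform provider — attribute values are taken from the plan output, while the `security_group_id` reference is an assumption about the surrounding configuration:

```hcl
# Hypothetical reconstruction of the planned VRRP rule; values match
# the plan output above, the parent-group reference is assumed.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description      = "vrrp"
  direction        = "ingress"
  ethertype        = "IPv4"
  protocol         = "112" # VRRP has no protocol keyword, so the raw IP protocol number is used
  remote_ip_prefix = "0.0.0.0/0"

  # Assumed: the rule is attached to the testbed-node group created in the same plan.
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```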
00:02:41.464194 | orchestrator | 2026-01-03 00:02:41.464198 | orchestrator | # openstack_networking_secgroup_v2.security_group_node will be created 2026-01-03 00:02:41.464202 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_node" { 2026-01-03 00:02:41.464206 | orchestrator | + all_tags = (known after apply) 2026-01-03 00:02:41.464210 | orchestrator | + description = "node security group" 2026-01-03 00:02:41.464213 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.464217 | orchestrator | + name = "testbed-node" 2026-01-03 00:02:41.464221 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.464225 | orchestrator | + stateful = (known after apply) 2026-01-03 00:02:41.464229 | orchestrator | + tenant_id = (known after apply) 2026-01-03 00:02:41.464232 | orchestrator | } 2026-01-03 00:02:41.464238 | orchestrator | 2026-01-03 00:02:41.464241 | orchestrator | # openstack_networking_subnet_v2.subnet_management will be created 2026-01-03 00:02:41.464245 | orchestrator | + resource "openstack_networking_subnet_v2" "subnet_management" { 2026-01-03 00:02:41.464249 | orchestrator | + all_tags = (known after apply) 2026-01-03 00:02:41.464253 | orchestrator | + cidr = "192.168.16.0/20" 2026-01-03 00:02:41.464257 | orchestrator | + dns_nameservers = [ 2026-01-03 00:02:41.464260 | orchestrator | + "8.8.8.8", 2026-01-03 00:02:41.464264 | orchestrator | + "9.9.9.9", 2026-01-03 00:02:41.464268 | orchestrator | ] 2026-01-03 00:02:41.464272 | orchestrator | + enable_dhcp = true 2026-01-03 00:02:41.464276 | orchestrator | + gateway_ip = (known after apply) 2026-01-03 00:02:41.464280 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.464283 | orchestrator | + ip_version = 4 2026-01-03 00:02:41.464287 | orchestrator | + ipv6_address_mode = (known after apply) 2026-01-03 00:02:41.464291 | orchestrator | + ipv6_ra_mode = (known after apply) 2026-01-03 00:02:41.464295 | orchestrator | + name = "subnet-testbed-management" 
2026-01-03 00:02:41.464299 | orchestrator | + network_id = (known after apply) 2026-01-03 00:02:41.464302 | orchestrator | + no_gateway = false 2026-01-03 00:02:41.464306 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.464310 | orchestrator | + service_types = (known after apply) 2026-01-03 00:02:41.464317 | orchestrator | + tenant_id = (known after apply) 2026-01-03 00:02:41.464321 | orchestrator | 2026-01-03 00:02:41.464325 | orchestrator | + allocation_pool { 2026-01-03 00:02:41.464328 | orchestrator | + end = "192.168.31.250" 2026-01-03 00:02:41.464332 | orchestrator | + start = "192.168.31.200" 2026-01-03 00:02:41.464336 | orchestrator | } 2026-01-03 00:02:41.464340 | orchestrator | } 2026-01-03 00:02:41.464345 | orchestrator | 2026-01-03 00:02:41.464349 | orchestrator | # terraform_data.image will be created 2026-01-03 00:02:41.464353 | orchestrator | + resource "terraform_data" "image" { 2026-01-03 00:02:41.464356 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.464360 | orchestrator | + input = "Ubuntu 24.04" 2026-01-03 00:02:41.464364 | orchestrator | + output = (known after apply) 2026-01-03 00:02:41.464371 | orchestrator | } 2026-01-03 00:02:41.464375 | orchestrator | 2026-01-03 00:02:41.464379 | orchestrator | # terraform_data.image_node will be created 2026-01-03 00:02:41.464382 | orchestrator | + resource "terraform_data" "image_node" { 2026-01-03 00:02:41.464386 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.464390 | orchestrator | + input = "Ubuntu 24.04" 2026-01-03 00:02:41.464394 | orchestrator | + output = (known after apply) 2026-01-03 00:02:41.464397 | orchestrator | } 2026-01-03 00:02:41.464401 | orchestrator | 2026-01-03 00:02:41.464405 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
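The management subnet planned above is a /20 whose DHCP allocation pool covers only the last 51 addresses, leaving the rest of 192.168.16.0/20 free for statically assigned ports. A sketch of the corresponding resource — CIDR, DNS servers, and pool bounds are copied from the plan output, the `network_id` reference is assumed:

```hcl
# Hypothetical reconstruction of the planned management subnet.
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id # assumed reference
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # DHCP only hands out 192.168.31.200-250; addresses below the pool
  # remain available for fixed IPs on the manager and node ports.
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```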
2026-01-03 00:02:41.464409 | orchestrator | 2026-01-03 00:02:41.464412 | orchestrator | Changes to Outputs: 2026-01-03 00:02:41.464416 | orchestrator | + manager_address = (sensitive value) 2026-01-03 00:02:41.464420 | orchestrator | + private_key = (sensitive value) 2026-01-03 00:02:41.561415 | orchestrator | terraform_data.image_node: Creating... 2026-01-03 00:02:41.561741 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=233535fe-8639-9f44-a6f8-9394b0227aa1] 2026-01-03 00:02:41.689473 | orchestrator | terraform_data.image: Creating... 2026-01-03 00:02:41.689927 | orchestrator | terraform_data.image: Creation complete after 0s [id=c0af4525-819a-e898-dc61-ae413afd4e79] 2026-01-03 00:02:41.708491 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-01-03 00:02:41.715978 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-01-03 00:02:41.717505 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-01-03 00:02:41.717552 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-01-03 00:02:41.720433 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-01-03 00:02:41.720482 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-01-03 00:02:41.726085 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-01-03 00:02:41.727491 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-01-03 00:02:41.728661 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-01-03 00:02:41.751057 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 
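Both outputs are marked sensitive, which is why the plan shows them as `(sensitive value)` and the final `Outputs:` section prints them blank. A sketch of output declarations with this behavior — the value expressions are assumptions, since the log does not show where the address and key come from in the configuration:

```hcl
# Hypothetical output declarations; "sensitive = true" suppresses the
# values in plan/apply console output, as seen in the log.
output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address # assumed source
  sensitive = true
}

output "private_key" {
  value     = tls_private_key.ssh.private_key_openssh # assumed key resource
  sensitive = true
}
```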
2026-01-03 00:02:43.487910 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-01-03 00:02:44.809104 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-01-03 00:02:44.809160 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-01-03 00:02:44.809174 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-01-03 00:02:44.809187 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 2s [id=testbed] 2026-01-03 00:02:44.809199 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-01-03 00:02:44.809212 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 2s [id=7ba897b2-b0b6-49a5-a4c6-64ccbeb70c43] 2026-01-03 00:02:44.809224 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-01-03 00:02:46.693842 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 5s [id=f493d531-f14a-40ab-852d-4e184520cb25] 2026-01-03 00:02:46.698772 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 5s [id=2050ce1a-3081-4edd-a04d-3576bece8338] 2026-01-03 00:02:46.700471 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2026-01-03 00:02:46.702720 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 5s [id=92ee9088-f522-4da5-b9de-cc8e73fea3b4] 2026-01-03 00:02:46.704236 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2026-01-03 00:02:46.710627 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 
2026-01-03 00:02:46.749241 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 5s [id=18deaf14-926e-4cd7-8e92-2fabf4ecc6e0] 2026-01-03 00:02:46.762077 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 5s [id=b0c096f4-c40f-4db0-bd86-40b4e9f72c6c] 2026-01-03 00:02:46.768485 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-01-03 00:02:46.770673 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2026-01-03 00:02:46.821514 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 5s [id=75764784-fbeb-447b-add5-f3485e6783bd] 2026-01-03 00:02:46.828833 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=c0ea832c-91ed-4e4f-b69a-de1dd6828a04] 2026-01-03 00:02:46.841509 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2026-01-03 00:02:46.843209 | orchestrator | local_file.id_rsa_pub: Creating... 2026-01-03 00:02:46.844194 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=deb598c2-f543-4f9b-b077-315ce19fa743] 2026-01-03 00:02:46.847950 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=64f2fd4f-89e8-4ffa-8baf-bdc6a23cfca5] 2026-01-03 00:02:46.850684 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=1b4472db0d6555a04e4d583fd66d90372ca852ef] 2026-01-03 00:02:46.855367 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 2026-01-03 00:02:46.855533 | orchestrator | local_sensitive_file.id_rsa: Creating... 
2026-01-03 00:02:46.859484 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=4067deb1aa4041f1b67eabf3e67706e5f8e54aea] 2026-01-03 00:02:47.622963 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=4e37b40c-ccdd-49d8-8a53-1ef7c745566e] 2026-01-03 00:02:47.906128 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=86de043c-4010-4cce-b2c1-601e74e6fd02] 2026-01-03 00:02:47.914444 | orchestrator | openstack_networking_router_v2.router: Creating... 2026-01-03 00:02:50.096169 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0] 2026-01-03 00:02:50.113242 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=480c0bd9-4479-4e9b-bee3-e1a1c18f46c7] 2026-01-03 00:02:50.225323 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=20e2a322-8c31-40eb-9f80-64f14276ce8b] 2026-01-03 00:02:50.234007 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=7bbcd537-85f1-4819-90f6-f7f08a06c207] 2026-01-03 00:02:50.256748 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba] 2026-01-03 00:02:50.259465 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=2bf45dd0-3b3b-4bfc-8f32-ca0729857a93] 2026-01-03 00:02:53.851032 | orchestrator | openstack_networking_router_v2.router: Creation complete after 6s [id=e67d3385-bad6-4b10-9257-18050c19352e] 2026-01-03 00:02:53.862061 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 2026-01-03 00:02:53.862143 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating... 
2026-01-03 00:02:53.866591 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-01-03 00:02:54.155709 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=f7af63e1-fc1b-4654-92cc-4dace4f6f471] 2026-01-03 00:02:54.171933 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 2026-01-03 00:02:54.172531 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-01-03 00:02:54.173753 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-01-03 00:02:54.174249 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 2026-01-03 00:02:54.174607 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-01-03 00:02:54.176104 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-01-03 00:02:54.184043 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-01-03 00:02:54.197600 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-01-03 00:02:54.250839 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=4bc2d97f-2064-4834-ad26-c1c270eb5880] 2026-01-03 00:02:54.266431 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 2026-01-03 00:02:54.590819 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=755fc333-849a-4e7c-bb4a-ebe5820cac0f] 2026-01-03 00:02:54.601620 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 
2026-01-03 00:02:54.799234 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=af9eaad6-3752-4383-9a0e-5a5f5d376e3c] 2026-01-03 00:02:54.805116 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-01-03 00:02:54.992131 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=6ec9e1b1-201d-489e-b84b-6bcb14e2eb54] 2026-01-03 00:02:54.997901 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-01-03 00:02:55.113943 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=090d4d6a-49ab-42fa-b8f3-96008a5ceb1c] 2026-01-03 00:02:55.120293 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2026-01-03 00:02:55.161378 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=e898c323-da3d-4061-92de-d57e839fffee] 2026-01-03 00:02:55.168775 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2026-01-03 00:02:55.269571 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=e3e70582-3db2-42af-b235-f0e105bfe1ee] 2026-01-03 00:02:55.281520 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 2026-01-03 00:02:55.298530 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=db23a1b0-34bb-4b13-bd37-f59ea5cb3430] 2026-01-03 00:02:55.302469 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 
2026-01-03 00:02:55.350822 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=dd0adf9e-f6c6-4f87-9cd0-9f206d233ce4] 2026-01-03 00:02:55.355513 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=25e953b5-2947-4fff-bdad-f60c78827d00] 2026-01-03 00:02:55.548419 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 2s [id=ebb5b7a7-6723-4886-b84d-52ba5be82e2a] 2026-01-03 00:02:55.548562 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=e4b026a2-acc0-47d3-93b1-cf4477c3bbb7] 2026-01-03 00:02:55.569239 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=3d68ded7-28c6-4a26-9de7-86c58b49c304] 2026-01-03 00:02:55.833307 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=6798d6a6-d5a0-4565-a1ab-67bb977c91c1] 2026-01-03 00:02:55.886216 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=e399d560-f444-4957-a3ca-0b9266d3839d] 2026-01-03 00:02:56.027411 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=ba80ce6d-322e-4824-b721-4e87465eefa4] 2026-01-03 00:02:56.116546 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=8be81dd0-f4fb-43bd-9f97-d7a8202cdb12] 2026-01-03 00:02:59.060007 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 5s [id=860f69d9-bb90-4bca-8860-cff7218d7274] 2026-01-03 00:02:59.094758 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-01-03 00:02:59.095490 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 
2026-01-03 00:02:59.096301 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 2026-01-03 00:02:59.102483 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-01-03 00:02:59.102953 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 2026-01-03 00:02:59.118422 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 2026-01-03 00:02:59.118619 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 2026-01-03 00:03:01.734410 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=f4a8ebb4-f815-4b11-8d6f-99bc9dbc652b] 2026-01-03 00:03:01.750570 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-01-03 00:03:01.752288 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-01-03 00:03:01.756989 | orchestrator | local_file.inventory: Creating... 2026-01-03 00:03:01.757339 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=28b2090da92a31014fb23d60c2c51eb0114930e7] 2026-01-03 00:03:01.762889 | orchestrator | local_file.inventory: Creation complete after 0s [id=ef43b84d4e64238741129086021a7990eb924346] 2026-01-03 00:03:02.484961 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=f4a8ebb4-f815-4b11-8d6f-99bc9dbc652b] 2026-01-03 00:03:09.096265 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2026-01-03 00:03:09.097491 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2026-01-03 00:03:09.103082 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-01-03 00:03:09.104445 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... 
[10s elapsed] 2026-01-03 00:03:09.119820 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-01-03 00:03:09.120046 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2026-01-03 00:03:19.096541 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-01-03 00:03:19.097594 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-01-03 00:03:19.103942 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-01-03 00:03:19.105438 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-01-03 00:03:19.121094 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-01-03 00:03:19.121211 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-01-03 00:03:29.106398 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2026-01-03 00:03:29.106496 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2026-01-03 00:03:29.106517 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2026-01-03 00:03:29.106526 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2026-01-03 00:03:29.122071 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2026-01-03 00:03:29.122172 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2026-01-03 00:03:39.116286 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed] 2026-01-03 00:03:39.116390 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... 
[40s elapsed] 2026-01-03 00:03:39.116401 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed] 2026-01-03 00:03:39.116420 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed] 2026-01-03 00:03:39.122610 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed] 2026-01-03 00:03:39.122710 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed] 2026-01-03 00:03:49.125306 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [50s elapsed] 2026-01-03 00:03:49.125475 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [50s elapsed] 2026-01-03 00:03:49.125511 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [50s elapsed] 2026-01-03 00:03:49.125540 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed] 2026-01-03 00:03:49.125559 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [50s elapsed] 2026-01-03 00:03:49.125588 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed] 2026-01-03 00:03:59.135316 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [1m0s elapsed] 2026-01-03 00:03:59.135422 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [1m0s elapsed] 2026-01-03 00:03:59.135430 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [1m0s elapsed] 2026-01-03 00:03:59.135434 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [1m0s elapsed] 2026-01-03 00:03:59.135438 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [1m0s elapsed] 2026-01-03 00:03:59.135443 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... 
[1m0s elapsed] 2026-01-03 00:03:59.817338 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 1m1s [id=e7fd09fa-af3f-47c6-8388-462fa3f6cf0b] 2026-01-03 00:03:59.934980 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 1m1s [id=318e4ca3-7e0f-451f-8a01-f781f5707c74] 2026-01-03 00:04:00.006416 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 1m1s [id=06fae456-7f4c-4701-86a2-5e42a99a8481] 2026-01-03 00:04:00.032090 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 1m1s [id=87f25dc0-11ad-4753-b6ac-2d7cc183764e] 2026-01-03 00:04:00.112285 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 1m1s [id=6bfec7c0-56c0-4c97-ba0a-398cc31444c3] 2026-01-03 00:04:09.142279 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [1m10s elapsed] 2026-01-03 00:04:09.948045 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 1m11s [id=5cf8ba30-fe33-4b48-b0e4-7b3e78bfe28c] 2026-01-03 00:04:09.995465 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2026-01-03 00:04:10.002685 | orchestrator | null_resource.node_semaphore: Creating... 2026-01-03 00:04:10.009134 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=2100317983364460498] 2026-01-03 00:04:10.010622 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-01-03 00:04:10.010918 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-01-03 00:04:10.011433 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-01-03 00:04:10.016453 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 
2026-01-03 00:04:10.042519 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-01-03 00:04:10.063052 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 2026-01-03 00:04:10.068857 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-01-03 00:04:10.074125 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2026-01-03 00:04:10.081523 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-01-03 00:04:13.385856 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=5cf8ba30-fe33-4b48-b0e4-7b3e78bfe28c/f493d531-f14a-40ab-852d-4e184520cb25] 2026-01-03 00:04:13.422545 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=5cf8ba30-fe33-4b48-b0e4-7b3e78bfe28c/deb598c2-f543-4f9b-b077-315ce19fa743] 2026-01-03 00:04:13.441245 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=06fae456-7f4c-4701-86a2-5e42a99a8481/75764784-fbeb-447b-add5-f3485e6783bd] 2026-01-03 00:04:13.458548 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=6bfec7c0-56c0-4c97-ba0a-398cc31444c3/92ee9088-f522-4da5-b9de-cc8e73fea3b4] 2026-01-03 00:04:13.480400 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=06fae456-7f4c-4701-86a2-5e42a99a8481/b0c096f4-c40f-4db0-bd86-40b4e9f72c6c] 2026-01-03 00:04:13.509935 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=6bfec7c0-56c0-4c97-ba0a-398cc31444c3/64f2fd4f-89e8-4ffa-8baf-bdc6a23cfca5] 2026-01-03 00:04:19.558409 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s 
[id=5cf8ba30-fe33-4b48-b0e4-7b3e78bfe28c/2050ce1a-3081-4edd-a04d-3576bece8338] 2026-01-03 00:04:19.585131 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=06fae456-7f4c-4701-86a2-5e42a99a8481/18deaf14-926e-4cd7-8e92-2fabf4ecc6e0] 2026-01-03 00:04:19.600762 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=6bfec7c0-56c0-4c97-ba0a-398cc31444c3/c0ea832c-91ed-4e4f-b69a-de1dd6828a04] 2026-01-03 00:04:20.065790 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2026-01-03 00:04:30.065894 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-01-03 00:04:30.370821 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=4ab452be-767c-4090-9a02-f3de67c48d67] 2026-01-03 00:04:33.145624 | orchestrator | 2026-01-03 00:04:33.145722 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
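Each `openstack_compute_volume_attach_v2` id in the apply log is the instance UUID and the volume UUID joined by a slash. A minimal sketch of one such attachment over the nine `node_volume` instances; the volume-to-node mapping is not visible in the log, so the lookup variable here is hypothetical:

```hcl
# Hypothetical reconstruction: the resulting resource id is
# "<instance_id>/<volume_id>", matching the ids in the log above.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count = 9 # the plan created node_volume[0..8]

  # Assumed: a lookup mapping each volume index to its target node index.
  instance_id = openstack_compute_instance_v2.node_server[var.node_for_volume[count.index]].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```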
2026-01-03 00:04:33.145847 | orchestrator | 2026-01-03 00:04:33.145865 | orchestrator | Outputs: 2026-01-03 00:04:33.145878 | orchestrator | 2026-01-03 00:04:33.145926 | orchestrator | manager_address = 2026-01-03 00:04:33.145941 | orchestrator | private_key = 2026-01-03 00:04:33.413989 | orchestrator | ok: Runtime: 0:01:56.731958 2026-01-03 00:04:33.451790 | 2026-01-03 00:04:33.451970 | TASK [Create infrastructure (stable)] 2026-01-03 00:04:33.990830 | orchestrator | skipping: Conditional result was False 2026-01-03 00:04:34.007731 | 2026-01-03 00:04:34.007908 | TASK [Fetch manager address] 2026-01-03 00:04:34.565616 | orchestrator | ok 2026-01-03 00:04:34.574659 | 2026-01-03 00:04:34.574792 | TASK [Set manager_host address] 2026-01-03 00:04:34.665550 | orchestrator | ok 2026-01-03 00:04:34.676882 | 2026-01-03 00:04:34.677024 | LOOP [Update ansible collections] 2026-01-03 00:04:36.098760 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-01-03 00:04:36.099210 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-03 00:04:36.099653 | orchestrator | Starting galaxy collection install process 2026-01-03 00:04:36.099709 | orchestrator | Process install dependency map 2026-01-03 00:04:36.099749 | orchestrator | Starting collection install process 2026-01-03 00:04:36.099785 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons' 2026-01-03 00:04:36.099825 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons 2026-01-03 00:04:36.099876 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-01-03 00:04:36.099961 | orchestrator | ok: Item: commons Runtime: 0:00:01.061096 2026-01-03 00:04:37.554922 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-01-03 
00:04:37.555092 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-03 00:04:37.555199 | orchestrator | Starting galaxy collection install process 2026-01-03 00:04:37.555243 | orchestrator | Process install dependency map 2026-01-03 00:04:37.555280 | orchestrator | Starting collection install process 2026-01-03 00:04:37.555315 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services' 2026-01-03 00:04:37.555368 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services 2026-01-03 00:04:37.555404 | orchestrator | osism.services:999.0.0 was installed successfully 2026-01-03 00:04:37.555462 | orchestrator | ok: Item: services Runtime: 0:00:01.125442 2026-01-03 00:04:37.578725 | 2026-01-03 00:04:37.578945 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-01-03 00:04:48.171510 | orchestrator | ok 2026-01-03 00:04:48.183165 | 2026-01-03 00:04:48.183302 | TASK [Wait a little longer for the manager so that everything is ready] 2026-01-03 00:05:48.236411 | orchestrator | ok 2026-01-03 00:05:48.246018 | 2026-01-03 00:05:48.246186 | TASK [Fetch manager ssh hostkey] 2026-01-03 00:05:49.827695 | orchestrator | Output suppressed because no_log was given 2026-01-03 00:05:49.845539 | 2026-01-03 00:05:49.845728 | TASK [Get ssh keypair from terraform environment] 2026-01-03 00:05:50.384383 | orchestrator | ok: Runtime: 0:00:00.007614 2026-01-03 00:05:50.400025 | 2026-01-03 00:05:50.400226 | TASK [Point out that the following task takes some time and does not give any output] 2026-01-03 00:05:50.432353 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2026-01-03 00:05:50.440045 | 2026-01-03 00:05:50.440231 | TASK [Run manager part 0] 2026-01-03 00:05:51.623489 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-03 00:05:51.692007 | orchestrator | 2026-01-03 00:05:51.692076 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-01-03 00:05:51.692084 | orchestrator | 2026-01-03 00:05:51.692102 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-01-03 00:05:53.514740 | orchestrator | ok: [testbed-manager] 2026-01-03 00:05:53.514836 | orchestrator | 2026-01-03 00:05:53.514866 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-03 00:05:53.514880 | orchestrator | 2026-01-03 00:05:53.514893 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-03 00:05:55.574307 | orchestrator | ok: [testbed-manager] 2026-01-03 00:05:55.574344 | orchestrator | 2026-01-03 00:05:55.574352 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-03 00:05:56.155378 | orchestrator | ok: [testbed-manager] 2026-01-03 00:05:56.155409 | orchestrator | 2026-01-03 00:05:56.155415 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-01-03 00:05:56.191820 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:05:56.191854 | orchestrator | 2026-01-03 00:05:56.191865 | orchestrator | TASK [Update package cache] **************************************************** 2026-01-03 00:05:56.215078 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:05:56.215106 | orchestrator | 2026-01-03 00:05:56.215113 | orchestrator | TASK [Install required packages] *********************************************** 2026-01-03 00:05:56.238908 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:05:56.238947 | 
orchestrator | 2026-01-03 00:05:56.238955 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-01-03 00:05:56.263291 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:05:56.263320 | orchestrator | 2026-01-03 00:05:56.263325 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-03 00:05:56.288324 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:05:56.288354 | orchestrator | 2026-01-03 00:05:56.288360 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-01-03 00:05:56.314654 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:05:56.314713 | orchestrator | 2026-01-03 00:05:56.314721 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-01-03 00:05:56.343288 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:05:56.343318 | orchestrator | 2026-01-03 00:05:56.343325 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-01-03 00:05:57.026300 | orchestrator | changed: [testbed-manager] 2026-01-03 00:05:57.026334 | orchestrator | 2026-01-03 00:05:57.026341 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-01-03 00:08:26.122041 | orchestrator | changed: [testbed-manager] 2026-01-03 00:08:26.123627 | orchestrator | 2026-01-03 00:08:26.123660 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-01-03 00:10:00.866275 | orchestrator | changed: [testbed-manager] 2026-01-03 00:10:00.866349 | orchestrator | 2026-01-03 00:10:00.866365 | orchestrator | TASK [Install required packages] *********************************************** 2026-01-03 00:10:22.662877 | orchestrator | changed: [testbed-manager] 2026-01-03 00:10:22.662921 | orchestrator | 2026-01-03 00:10:22.662931 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2026-01-03 00:10:31.089666 | orchestrator | changed: [testbed-manager] 2026-01-03 00:10:31.089763 | orchestrator | 2026-01-03 00:10:31.089781 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-03 00:10:31.140738 | orchestrator | ok: [testbed-manager] 2026-01-03 00:10:31.140811 | orchestrator | 2026-01-03 00:10:31.140826 | orchestrator | TASK [Get current user] ******************************************************** 2026-01-03 00:10:31.933809 | orchestrator | ok: [testbed-manager] 2026-01-03 00:10:31.933887 | orchestrator | 2026-01-03 00:10:31.933897 | orchestrator | TASK [Create venv directory] *************************************************** 2026-01-03 00:10:32.655736 | orchestrator | changed: [testbed-manager] 2026-01-03 00:10:32.655821 | orchestrator | 2026-01-03 00:10:32.655840 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-01-03 00:10:38.190928 | orchestrator | changed: [testbed-manager] 2026-01-03 00:10:38.190985 | orchestrator | 2026-01-03 00:10:38.191007 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-01-03 00:10:43.767894 | orchestrator | changed: [testbed-manager] 2026-01-03 00:10:43.767955 | orchestrator | 2026-01-03 00:10:43.767970 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-01-03 00:10:45.993784 | orchestrator | changed: [testbed-manager] 2026-01-03 00:10:45.993830 | orchestrator | 2026-01-03 00:10:45.993840 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-01-03 00:10:47.577782 | orchestrator | changed: [testbed-manager] 2026-01-03 00:10:47.577828 | orchestrator | 2026-01-03 00:10:47.577837 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-01-03 
00:10:48.590283 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-03 00:10:48.590388 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-03 00:10:48.590405 | orchestrator | 2026-01-03 00:10:48.590418 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-01-03 00:10:48.634981 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-03 00:10:48.635062 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-03 00:10:48.635078 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-03 00:10:48.635091 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-01-03 00:10:56.988455 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-03 00:10:56.988548 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-03 00:10:56.988563 | orchestrator | 2026-01-03 00:10:56.988576 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-01-03 00:10:57.549188 | orchestrator | changed: [testbed-manager] 2026-01-03 00:10:57.549279 | orchestrator | 2026-01-03 00:10:57.549295 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-01-03 00:12:16.278519 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-01-03 00:12:16.278590 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-01-03 00:12:16.278602 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-01-03 00:12:16.278611 | orchestrator | 2026-01-03 00:12:16.278620 | orchestrator | TASK [Install local collections] *********************************************** 2026-01-03 00:12:18.590173 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2026-01-03 00:12:18.590266 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-01-03 00:12:18.590283 | orchestrator | 2026-01-03 00:12:18.590296 | orchestrator | PLAY [Create operator user] **************************************************** 2026-01-03 00:12:18.590308 | orchestrator | 2026-01-03 00:12:18.590320 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-03 00:12:19.960519 | orchestrator | ok: [testbed-manager] 2026-01-03 00:12:19.960626 | orchestrator | 2026-01-03 00:12:19.960646 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-03 00:12:20.011895 | orchestrator | ok: [testbed-manager] 2026-01-03 00:12:20.011934 | orchestrator | 2026-01-03 00:12:20.011942 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-03 00:12:20.077177 | orchestrator | ok: [testbed-manager] 2026-01-03 00:12:20.077225 | orchestrator | 2026-01-03 00:12:20.077237 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-01-03 00:12:20.820221 | orchestrator | changed: [testbed-manager] 2026-01-03 00:12:20.821038 | orchestrator | 2026-01-03 00:12:20.821093 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-01-03 00:12:21.530087 | orchestrator | changed: [testbed-manager] 2026-01-03 00:12:21.530157 | orchestrator | 2026-01-03 00:12:21.530174 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-01-03 00:12:22.884932 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-01-03 00:12:22.884986 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-01-03 00:12:22.884994 | orchestrator | 2026-01-03 00:12:22.885012 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2026-01-03 00:12:24.318141 | orchestrator | changed: [testbed-manager] 2026-01-03 00:12:24.318243 | orchestrator | 2026-01-03 00:12:24.318259 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-01-03 00:12:26.106658 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-01-03 00:12:26.107402 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-01-03 00:12:26.107431 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-01-03 00:12:26.107443 | orchestrator | 2026-01-03 00:12:26.107454 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-01-03 00:12:26.172867 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:12:26.172931 | orchestrator | 2026-01-03 00:12:26.172947 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-01-03 00:12:26.252980 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:12:26.253040 | orchestrator | 2026-01-03 00:12:26.253088 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-01-03 00:12:26.789335 | orchestrator | changed: [testbed-manager] 2026-01-03 00:12:26.789393 | orchestrator | 2026-01-03 00:12:26.789408 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-01-03 00:12:26.869378 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:12:26.869443 | orchestrator | 2026-01-03 00:12:26.869459 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-01-03 00:12:27.735590 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-03 00:12:27.735653 | orchestrator | changed: [testbed-manager] 2026-01-03 00:12:27.735695 | orchestrator | 2026-01-03 00:12:27.735708 | orchestrator | TASK 
[osism.commons.operator : Delete ssh authorized keys] ********************* 2026-01-03 00:12:27.774428 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:12:27.774498 | orchestrator | 2026-01-03 00:12:27.774514 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-01-03 00:12:27.810825 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:12:27.810884 | orchestrator | 2026-01-03 00:12:27.810900 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-01-03 00:12:27.838948 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:12:27.839006 | orchestrator | 2026-01-03 00:12:27.839024 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-01-03 00:12:27.904858 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:12:27.904897 | orchestrator | 2026-01-03 00:12:27.904905 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-01-03 00:12:28.608318 | orchestrator | ok: [testbed-manager] 2026-01-03 00:12:28.608385 | orchestrator | 2026-01-03 00:12:28.608401 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-03 00:12:28.608414 | orchestrator | 2026-01-03 00:12:28.608439 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-03 00:12:29.998321 | orchestrator | ok: [testbed-manager] 2026-01-03 00:12:29.998389 | orchestrator | 2026-01-03 00:12:29.998405 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-01-03 00:12:30.949161 | orchestrator | changed: [testbed-manager] 2026-01-03 00:12:30.949195 | orchestrator | 2026-01-03 00:12:30.949201 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:12:30.949206 | orchestrator | testbed-manager : ok=33 changed=23 
unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-01-03 00:12:30.949211 | orchestrator | 2026-01-03 00:12:31.227084 | orchestrator | ok: Runtime: 0:06:40.208573 2026-01-03 00:12:31.247279 | 2026-01-03 00:12:31.247480 | TASK [Point out that logging in on the manager is now possible] 2026-01-03 00:12:31.288650 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-01-03 00:12:31.299644 | 2026-01-03 00:12:31.299805 | TASK [Point out that the following task takes some time and does not give any output] 2026-01-03 00:12:31.345644 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-01-03 00:12:31.355228 | 2026-01-03 00:12:31.355362 | TASK [Run manager part 1 + 2] 2026-01-03 00:12:32.826436 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-03 00:12:32.894406 | orchestrator | 2026-01-03 00:12:32.894461 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-01-03 00:12:32.894471 | orchestrator | 2026-01-03 00:12:32.894487 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-03 00:12:35.736083 | orchestrator | ok: [testbed-manager] 2026-01-03 00:12:35.736138 | orchestrator | 2026-01-03 00:12:35.736166 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-03 00:12:35.776259 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:12:35.776398 | orchestrator | 2026-01-03 00:12:35.776409 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-03 00:12:35.822008 | orchestrator | ok: [testbed-manager] 2026-01-03 00:12:35.822091 | orchestrator | 2026-01-03 00:12:35.822099 | orchestrator | TASK [osism.commons.repository : Gather variables for
each operating system] *** 2026-01-03 00:12:35.854879 | orchestrator | ok: [testbed-manager] 2026-01-03 00:12:35.854921 | orchestrator | 2026-01-03 00:12:35.854928 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-03 00:12:35.913735 | orchestrator | ok: [testbed-manager] 2026-01-03 00:12:35.913867 | orchestrator | 2026-01-03 00:12:35.913876 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-03 00:12:35.981996 | orchestrator | ok: [testbed-manager] 2026-01-03 00:12:35.982476 | orchestrator | 2026-01-03 00:12:35.982494 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-03 00:12:36.042604 | orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-01-03 00:12:36.042651 | orchestrator | 2026-01-03 00:12:36.042657 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-03 00:12:36.741935 | orchestrator | ok: [testbed-manager] 2026-01-03 00:12:36.741993 | orchestrator | 2026-01-03 00:12:36.742000 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-03 00:12:36.781824 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:12:36.781872 | orchestrator | 2026-01-03 00:12:36.781878 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-03 00:12:38.122326 | orchestrator | changed: [testbed-manager] 2026-01-03 00:12:38.122440 | orchestrator | 2026-01-03 00:12:38.122458 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-03 00:12:38.700819 | orchestrator | ok: [testbed-manager] 2026-01-03 00:12:38.700910 | orchestrator | 2026-01-03 00:12:38.700927 | orchestrator | TASK [osism.commons.repository : Copy 
ubuntu.sources file] ********************* 2026-01-03 00:12:39.839229 | orchestrator | changed: [testbed-manager] 2026-01-03 00:12:39.839359 | orchestrator | 2026-01-03 00:12:39.839373 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-03 00:12:54.891054 | orchestrator | changed: [testbed-manager] 2026-01-03 00:12:54.891106 | orchestrator | 2026-01-03 00:12:54.891116 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-03 00:12:55.567220 | orchestrator | ok: [testbed-manager] 2026-01-03 00:12:55.567309 | orchestrator | 2026-01-03 00:12:55.567327 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-01-03 00:12:55.626618 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:12:55.626707 | orchestrator | 2026-01-03 00:12:55.626723 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-01-03 00:12:56.578222 | orchestrator | changed: [testbed-manager] 2026-01-03 00:12:56.578315 | orchestrator | 2026-01-03 00:12:56.578333 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-01-03 00:12:57.535892 | orchestrator | changed: [testbed-manager] 2026-01-03 00:12:57.535979 | orchestrator | 2026-01-03 00:12:57.535994 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-01-03 00:12:58.095729 | orchestrator | changed: [testbed-manager] 2026-01-03 00:12:58.095770 | orchestrator | 2026-01-03 00:12:58.095778 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-01-03 00:12:58.137690 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-03 00:12:58.137828 | orchestrator | display.prompt_until(msg) instead. 
This feature will be removed in version 2026-01-03 00:12:58.137853 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-03 00:12:58.137872 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-01-03 00:13:00.531827 | orchestrator | changed: [testbed-manager] 2026-01-03 00:13:00.531917 | orchestrator | 2026-01-03 00:13:00.531931 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-01-03 00:13:09.198965 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-01-03 00:13:09.199323 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-01-03 00:13:09.199357 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-01-03 00:13:09.199391 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-01-03 00:13:09.199416 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-01-03 00:13:09.199432 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-01-03 00:13:09.199446 | orchestrator | 2026-01-03 00:13:09.199463 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-01-03 00:13:10.197393 | orchestrator | changed: [testbed-manager] 2026-01-03 00:13:10.197482 | orchestrator | 2026-01-03 00:13:10.197501 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-01-03 00:13:10.232207 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:13:10.232291 | orchestrator | 2026-01-03 00:13:10.232305 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-01-03 00:13:13.294131 | orchestrator | changed: [testbed-manager] 2026-01-03 00:13:13.294228 | orchestrator | 2026-01-03 00:13:13.294249 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-01-03 00:13:13.337544 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:13:13.337607 | 
orchestrator | 2026-01-03 00:13:13.337614 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-01-03 00:14:46.145005 | orchestrator | changed: [testbed-manager] 2026-01-03 00:14:46.145055 | orchestrator | 2026-01-03 00:14:46.145064 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-03 00:14:47.216662 | orchestrator | ok: [testbed-manager] 2026-01-03 00:14:47.216714 | orchestrator | 2026-01-03 00:14:47.216721 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:14:47.216727 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-01-03 00:14:47.216731 | orchestrator | 2026-01-03 00:14:47.491727 | orchestrator | ok: Runtime: 0:02:15.639044 2026-01-03 00:14:47.509509 | 2026-01-03 00:14:47.509649 | TASK [Reboot manager] 2026-01-03 00:14:49.046407 | orchestrator | ok: Runtime: 0:00:00.939779 2026-01-03 00:14:49.067864 | 2026-01-03 00:14:49.068076 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-01-03 00:15:05.518683 | orchestrator | ok 2026-01-03 00:15:05.529337 | 2026-01-03 00:15:05.529504 | TASK [Wait a little longer for the manager so that everything is ready] 2026-01-03 00:16:05.582967 | orchestrator | ok 2026-01-03 00:16:05.593500 | 2026-01-03 00:16:05.593644 | TASK [Deploy manager + bootstrap nodes] 2026-01-03 00:16:08.074461 | orchestrator | 2026-01-03 00:16:08.074646 | orchestrator | # DEPLOY MANAGER 2026-01-03 00:16:08.074670 | orchestrator | 2026-01-03 00:16:08.074686 | orchestrator | + set -e 2026-01-03 00:16:08.074699 | orchestrator | + echo 2026-01-03 00:16:08.074713 | orchestrator | + echo '# DEPLOY MANAGER' 2026-01-03 00:16:08.074731 | orchestrator | + echo 2026-01-03 00:16:08.074778 | orchestrator | + cat /opt/manager-vars.sh 2026-01-03 00:16:08.078087 | orchestrator | export NUMBER_OF_NODES=6 2026-01-03 
00:16:08.078128 | orchestrator | 2026-01-03 00:16:08.078144 | orchestrator | export CEPH_VERSION=reef 2026-01-03 00:16:08.078162 | orchestrator | export CONFIGURATION_VERSION=main 2026-01-03 00:16:08.078177 | orchestrator | export MANAGER_VERSION=latest 2026-01-03 00:16:08.078201 | orchestrator | export OPENSTACK_VERSION=2025.1 2026-01-03 00:16:08.078212 | orchestrator | 2026-01-03 00:16:08.078231 | orchestrator | export ARA=false 2026-01-03 00:16:08.078242 | orchestrator | export DEPLOY_MODE=manager 2026-01-03 00:16:08.078260 | orchestrator | export TEMPEST=true 2026-01-03 00:16:08.078275 | orchestrator | export IS_ZUUL=true 2026-01-03 00:16:08.078294 | orchestrator | 2026-01-03 00:16:08.078321 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.133 2026-01-03 00:16:08.078341 | orchestrator | export EXTERNAL_API=false 2026-01-03 00:16:08.078362 | orchestrator | 2026-01-03 00:16:08.078382 | orchestrator | export IMAGE_USER=ubuntu 2026-01-03 00:16:08.078418 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-01-03 00:16:08.078437 | orchestrator | 2026-01-03 00:16:08.078456 | orchestrator | export CEPH_STACK=ceph-ansible 2026-01-03 00:16:08.078488 | orchestrator | 2026-01-03 00:16:08.078508 | orchestrator | + echo 2026-01-03 00:16:08.078530 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-03 00:16:08.079300 | orchestrator | ++ export INTERACTIVE=false 2026-01-03 00:16:08.079327 | orchestrator | ++ INTERACTIVE=false 2026-01-03 00:16:08.079341 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-03 00:16:08.079356 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-03 00:16:08.079618 | orchestrator | + source /opt/manager-vars.sh 2026-01-03 00:16:08.079759 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-03 00:16:08.079813 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-03 00:16:08.079837 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-03 00:16:08.079952 | orchestrator | ++ CEPH_VERSION=reef 2026-01-03 00:16:08.079977 | orchestrator 
| ++ export CONFIGURATION_VERSION=main 2026-01-03 00:16:08.079997 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-03 00:16:08.080017 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-03 00:16:08.080047 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-03 00:16:08.080066 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-01-03 00:16:08.080093 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-01-03 00:16:08.080109 | orchestrator | ++ export ARA=false 2026-01-03 00:16:08.080128 | orchestrator | ++ ARA=false 2026-01-03 00:16:08.080146 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-03 00:16:08.080164 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-03 00:16:08.080183 | orchestrator | ++ export TEMPEST=true 2026-01-03 00:16:08.080225 | orchestrator | ++ TEMPEST=true 2026-01-03 00:16:08.080251 | orchestrator | ++ export IS_ZUUL=true 2026-01-03 00:16:08.080264 | orchestrator | ++ IS_ZUUL=true 2026-01-03 00:16:08.080275 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.133 2026-01-03 00:16:08.080289 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.133 2026-01-03 00:16:08.080308 | orchestrator | ++ export EXTERNAL_API=false 2026-01-03 00:16:08.080327 | orchestrator | ++ EXTERNAL_API=false 2026-01-03 00:16:08.080345 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-03 00:16:08.080364 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-03 00:16:08.080396 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-03 00:16:08.080416 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-03 00:16:08.080435 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-03 00:16:08.080453 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-03 00:16:08.080476 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-01-03 00:16:08.140511 | orchestrator | + docker version 2026-01-03 00:16:08.385020 | orchestrator | Client: Docker Engine - Community 2026-01-03 00:16:08.385128 | orchestrator | Version: 27.5.1 
2026-01-03 00:16:08.385145 | orchestrator | API version: 1.47 2026-01-03 00:16:08.385159 | orchestrator | Go version: go1.22.11 2026-01-03 00:16:08.385171 | orchestrator | Git commit: 9f9e405 2026-01-03 00:16:08.385183 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-03 00:16:08.385195 | orchestrator | OS/Arch: linux/amd64 2026-01-03 00:16:08.385213 | orchestrator | Context: default 2026-01-03 00:16:08.385225 | orchestrator | 2026-01-03 00:16:08.385237 | orchestrator | Server: Docker Engine - Community 2026-01-03 00:16:08.385249 | orchestrator | Engine: 2026-01-03 00:16:08.385279 | orchestrator | Version: 27.5.1 2026-01-03 00:16:08.385293 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-01-03 00:16:08.385346 | orchestrator | Go version: go1.22.11 2026-01-03 00:16:08.385367 | orchestrator | Git commit: 4c9b3b0 2026-01-03 00:16:08.385387 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-03 00:16:08.385406 | orchestrator | OS/Arch: linux/amd64 2026-01-03 00:16:08.385425 | orchestrator | Experimental: false 2026-01-03 00:16:08.385444 | orchestrator | containerd: 2026-01-03 00:16:08.385464 | orchestrator | Version: v2.2.1 2026-01-03 00:16:08.385484 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-01-03 00:16:08.385514 | orchestrator | runc: 2026-01-03 00:16:08.385526 | orchestrator | Version: 1.3.4 2026-01-03 00:16:08.385545 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-01-03 00:16:08.385557 | orchestrator | docker-init: 2026-01-03 00:16:08.385568 | orchestrator | Version: 0.19.0 2026-01-03 00:16:08.385580 | orchestrator | GitCommit: de40ad0 2026-01-03 00:16:08.389205 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-01-03 00:16:08.399255 | orchestrator | + set -e 2026-01-03 00:16:08.399299 | orchestrator | + source /opt/manager-vars.sh 2026-01-03 00:16:08.399319 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-03 00:16:08.399331 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-03 
00:16:08.399349 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-03 00:16:08.399367 | orchestrator | ++ CEPH_VERSION=reef 2026-01-03 00:16:08.399384 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-03 00:16:08.399401 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-03 00:16:08.399427 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-03 00:16:08.399443 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-03 00:16:08.399453 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-01-03 00:16:08.399463 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-01-03 00:16:08.399472 | orchestrator | ++ export ARA=false 2026-01-03 00:16:08.399482 | orchestrator | ++ ARA=false 2026-01-03 00:16:08.399499 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-03 00:16:08.399510 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-03 00:16:08.399519 | orchestrator | ++ export TEMPEST=true 2026-01-03 00:16:08.399529 | orchestrator | ++ TEMPEST=true 2026-01-03 00:16:08.399538 | orchestrator | ++ export IS_ZUUL=true 2026-01-03 00:16:08.399548 | orchestrator | ++ IS_ZUUL=true 2026-01-03 00:16:08.399557 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.133 2026-01-03 00:16:08.399567 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.133 2026-01-03 00:16:08.399577 | orchestrator | ++ export EXTERNAL_API=false 2026-01-03 00:16:08.399586 | orchestrator | ++ EXTERNAL_API=false 2026-01-03 00:16:08.399596 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-03 00:16:08.399605 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-03 00:16:08.399614 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-03 00:16:08.399624 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-03 00:16:08.399634 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-03 00:16:08.399643 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-03 00:16:08.399653 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-03 00:16:08.399662 | orchestrator | ++ export 
INTERACTIVE=false 2026-01-03 00:16:08.399672 | orchestrator | ++ INTERACTIVE=false 2026-01-03 00:16:08.399681 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-03 00:16:08.399694 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-03 00:16:08.399840 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-03 00:16:08.399856 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-03 00:16:08.399866 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2026-01-03 00:16:08.407037 | orchestrator | + set -e 2026-01-03 00:16:08.407073 | orchestrator | + VERSION=reef 2026-01-03 00:16:08.408281 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-01-03 00:16:08.413237 | orchestrator | + [[ -n ceph_version: reef ]] 2026-01-03 00:16:08.413273 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-01-03 00:16:08.419049 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2025.1 2026-01-03 00:16:08.426356 | orchestrator | + set -e 2026-01-03 00:16:08.426863 | orchestrator | + VERSION=2025.1 2026-01-03 00:16:08.427399 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-01-03 00:16:08.431435 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-01-03 00:16:08.431482 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2025.1/g' /opt/configuration/environments/manager/configuration.yml 2026-01-03 00:16:08.436214 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-01-03 00:16:08.437008 | orchestrator | ++ semver latest 7.0.0 2026-01-03 00:16:08.499928 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-03 00:16:08.500079 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-03 00:16:08.500120 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-01-03 00:16:08.501083 | orchestrator | ++ semver latest 10.0.0-0 2026-01-03 00:16:08.562014 | 
orchestrator | + [[ -1 -ge 0 ]] 2026-01-03 00:16:08.562938 | orchestrator | ++ semver 2025.1 2025.1 2026-01-03 00:16:08.642990 | orchestrator | + [[ 0 -ge 0 ]] 2026-01-03 00:16:08.643079 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml 2026-01-03 00:16:08.649871 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml 2026-01-03 00:16:08.654661 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-01-03 00:16:08.745978 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-03 00:16:08.746995 | orchestrator | + source /opt/venv/bin/activate 2026-01-03 00:16:08.748153 | orchestrator | ++ deactivate nondestructive 2026-01-03 00:16:08.748204 | orchestrator | ++ '[' -n '' ']' 2026-01-03 00:16:08.748224 | orchestrator | ++ '[' -n '' ']' 2026-01-03 00:16:08.748244 | orchestrator | ++ hash -r 2026-01-03 00:16:08.748383 | orchestrator | ++ '[' -n '' ']' 2026-01-03 00:16:08.748401 | orchestrator | ++ unset VIRTUAL_ENV 2026-01-03 00:16:08.748412 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-01-03 00:16:08.748423 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-01-03 00:16:08.748669 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-01-03 00:16:08.748695 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-01-03 00:16:08.748716 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-01-03 00:16:08.748745 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-01-03 00:16:08.748764 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-03 00:16:08.748856 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-03 00:16:08.748887 | orchestrator | ++ export PATH 2026-01-03 00:16:08.748913 | orchestrator | ++ '[' -n '' ']' 2026-01-03 00:16:08.748940 | orchestrator | ++ '[' -z '' ']' 2026-01-03 00:16:08.748959 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-01-03 00:16:08.748978 | orchestrator | ++ PS1='(venv) ' 2026-01-03 00:16:08.748997 | orchestrator | ++ export PS1 2026-01-03 00:16:08.749016 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-01-03 00:16:08.749049 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-01-03 00:16:08.749068 | orchestrator | ++ hash -r 2026-01-03 00:16:08.749454 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-01-03 00:16:09.878575 | orchestrator | 2026-01-03 00:16:09.878679 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-01-03 00:16:09.878695 | orchestrator | 2026-01-03 00:16:09.878707 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-01-03 00:16:10.428995 | orchestrator | ok: [testbed-manager] 2026-01-03 00:16:10.429104 | orchestrator | 2026-01-03 00:16:10.429121 | orchestrator | TASK [Copy fact files] ********************************************************* 
2026-01-03 00:16:11.370935 | orchestrator | changed: [testbed-manager] 2026-01-03 00:16:11.371039 | orchestrator | 2026-01-03 00:16:11.371058 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-01-03 00:16:11.371070 | orchestrator | 2026-01-03 00:16:11.371082 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-03 00:16:14.606146 | orchestrator | ok: [testbed-manager] 2026-01-03 00:16:14.606260 | orchestrator | 2026-01-03 00:16:14.606278 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-01-03 00:16:14.659562 | orchestrator | ok: [testbed-manager] 2026-01-03 00:16:14.659668 | orchestrator | 2026-01-03 00:16:14.659687 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-01-03 00:16:15.102870 | orchestrator | changed: [testbed-manager] 2026-01-03 00:16:15.102971 | orchestrator | 2026-01-03 00:16:15.102989 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-01-03 00:16:15.136861 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:16:15.136951 | orchestrator | 2026-01-03 00:16:15.136964 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-01-03 00:16:15.464876 | orchestrator | changed: [testbed-manager] 2026-01-03 00:16:15.465011 | orchestrator | 2026-01-03 00:16:15.465028 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2026-01-03 00:16:15.516704 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:16:15.516776 | orchestrator | 2026-01-03 00:16:15.516839 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-01-03 00:16:15.835013 | orchestrator | ok: [testbed-manager] 2026-01-03 00:16:15.835097 | orchestrator | 2026-01-03 00:16:15.835109 | orchestrator | TASK 
[Add nova_compute_virt_type parameter] ************************************ 2026-01-03 00:16:15.960548 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:16:15.960647 | orchestrator | 2026-01-03 00:16:15.960662 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-01-03 00:16:15.960676 | orchestrator | 2026-01-03 00:16:15.960688 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-03 00:16:17.651885 | orchestrator | ok: [testbed-manager] 2026-01-03 00:16:17.651977 | orchestrator | 2026-01-03 00:16:17.651992 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-01-03 00:16:17.761896 | orchestrator | included: osism.services.traefik for testbed-manager 2026-01-03 00:16:17.761992 | orchestrator | 2026-01-03 00:16:17.762008 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-01-03 00:16:17.813470 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-01-03 00:16:17.813593 | orchestrator | 2026-01-03 00:16:17.813610 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-01-03 00:16:18.885519 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-01-03 00:16:18.885617 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-01-03 00:16:18.885633 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-01-03 00:16:18.885645 | orchestrator | 2026-01-03 00:16:18.885658 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-01-03 00:16:20.639601 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-01-03 00:16:20.639693 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 
2026-01-03 00:16:20.639710 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-01-03 00:16:20.639723 | orchestrator | 2026-01-03 00:16:20.639736 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-01-03 00:16:21.259554 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-03 00:16:21.259662 | orchestrator | changed: [testbed-manager] 2026-01-03 00:16:21.259679 | orchestrator | 2026-01-03 00:16:21.259692 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-01-03 00:16:21.908556 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-03 00:16:21.908663 | orchestrator | changed: [testbed-manager] 2026-01-03 00:16:21.908680 | orchestrator | 2026-01-03 00:16:21.908693 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-01-03 00:16:21.970435 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:16:21.970526 | orchestrator | 2026-01-03 00:16:21.970542 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-01-03 00:16:22.296524 | orchestrator | ok: [testbed-manager] 2026-01-03 00:16:22.296615 | orchestrator | 2026-01-03 00:16:22.296631 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-01-03 00:16:22.371432 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-01-03 00:16:22.371523 | orchestrator | 2026-01-03 00:16:22.371539 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-01-03 00:16:23.452391 | orchestrator | changed: [testbed-manager] 2026-01-03 00:16:23.452454 | orchestrator | 2026-01-03 00:16:23.452461 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-01-03 
00:16:24.248974 | orchestrator | changed: [testbed-manager] 2026-01-03 00:16:24.249066 | orchestrator | 2026-01-03 00:16:24.249088 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-01-03 00:16:42.678395 | orchestrator | changed: [testbed-manager] 2026-01-03 00:16:42.678513 | orchestrator | 2026-01-03 00:16:42.678531 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-01-03 00:16:42.726382 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:16:42.726472 | orchestrator | 2026-01-03 00:16:42.726486 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-01-03 00:16:42.726498 | orchestrator | 2026-01-03 00:16:42.726510 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-03 00:16:44.514734 | orchestrator | ok: [testbed-manager] 2026-01-03 00:16:44.514844 | orchestrator | 2026-01-03 00:16:44.514855 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-01-03 00:16:44.622590 | orchestrator | included: osism.services.manager for testbed-manager 2026-01-03 00:16:44.622681 | orchestrator | 2026-01-03 00:16:44.622698 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-01-03 00:16:44.680728 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-01-03 00:16:44.680869 | orchestrator | 2026-01-03 00:16:44.680899 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-01-03 00:16:47.124750 | orchestrator | ok: [testbed-manager] 2026-01-03 00:16:47.124877 | orchestrator | 2026-01-03 00:16:47.124895 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-01-03 00:16:47.171707 | 
orchestrator | ok: [testbed-manager] 2026-01-03 00:16:47.171819 | orchestrator | 2026-01-03 00:16:47.171835 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-01-03 00:16:47.294800 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-01-03 00:16:47.294895 | orchestrator | 2026-01-03 00:16:47.294910 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-01-03 00:16:50.137291 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-01-03 00:16:50.137393 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-01-03 00:16:50.137407 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-01-03 00:16:50.137418 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-01-03 00:16:50.137428 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-01-03 00:16:50.137438 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-01-03 00:16:50.137448 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-01-03 00:16:50.137457 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-01-03 00:16:50.137468 | orchestrator | 2026-01-03 00:16:50.137479 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-01-03 00:16:50.753826 | orchestrator | changed: [testbed-manager] 2026-01-03 00:16:50.753900 | orchestrator | 2026-01-03 00:16:50.753908 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-01-03 00:16:51.389666 | orchestrator | changed: [testbed-manager] 2026-01-03 00:16:51.389832 | orchestrator | 2026-01-03 00:16:51.389862 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-01-03 
00:16:51.473850 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-01-03 00:16:51.473926 | orchestrator | 2026-01-03 00:16:51.473935 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-01-03 00:16:52.664533 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-01-03 00:16:52.664623 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-01-03 00:16:52.664637 | orchestrator | 2026-01-03 00:16:52.664648 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-01-03 00:16:53.264363 | orchestrator | changed: [testbed-manager] 2026-01-03 00:16:53.264464 | orchestrator | 2026-01-03 00:16:53.264483 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-01-03 00:16:53.319603 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:16:53.319694 | orchestrator | 2026-01-03 00:16:53.319710 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-01-03 00:16:53.404866 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-01-03 00:16:53.405001 | orchestrator | 2026-01-03 00:16:53.405018 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-01-03 00:16:53.996838 | orchestrator | changed: [testbed-manager] 2026-01-03 00:16:53.996950 | orchestrator | 2026-01-03 00:16:53.996967 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-01-03 00:16:54.056971 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-01-03 00:16:54.057060 | orchestrator | 2026-01-03 00:16:54.057081 
| orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-01-03 00:16:55.413729 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-03 00:16:55.413850 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-03 00:16:55.413867 | orchestrator | changed: [testbed-manager] 2026-01-03 00:16:55.413880 | orchestrator | 2026-01-03 00:16:55.413892 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-01-03 00:16:56.053368 | orchestrator | changed: [testbed-manager] 2026-01-03 00:16:56.053516 | orchestrator | 2026-01-03 00:16:56.053544 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-01-03 00:16:56.104891 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:16:56.105002 | orchestrator | 2026-01-03 00:16:56.105054 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-01-03 00:16:56.180121 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-01-03 00:16:56.180212 | orchestrator | 2026-01-03 00:16:56.180235 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-01-03 00:16:56.703701 | orchestrator | changed: [testbed-manager] 2026-01-03 00:16:56.703800 | orchestrator | 2026-01-03 00:16:56.703815 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-01-03 00:16:57.108922 | orchestrator | changed: [testbed-manager] 2026-01-03 00:16:57.109016 | orchestrator | 2026-01-03 00:16:57.109034 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-01-03 00:16:58.356143 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-01-03 00:16:58.356252 | orchestrator | changed: [testbed-manager] => (item=openstack) 
2026-01-03 00:16:58.356268 | orchestrator | 2026-01-03 00:16:58.356982 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-01-03 00:16:58.977659 | orchestrator | changed: [testbed-manager] 2026-01-03 00:16:58.977753 | orchestrator | 2026-01-03 00:16:58.977799 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-01-03 00:16:59.343521 | orchestrator | ok: [testbed-manager] 2026-01-03 00:16:59.343615 | orchestrator | 2026-01-03 00:16:59.343632 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-01-03 00:16:59.707513 | orchestrator | changed: [testbed-manager] 2026-01-03 00:16:59.707601 | orchestrator | 2026-01-03 00:16:59.707618 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-01-03 00:16:59.759055 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:16:59.759142 | orchestrator | 2026-01-03 00:16:59.759159 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-01-03 00:16:59.824527 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-01-03 00:16:59.824619 | orchestrator | 2026-01-03 00:16:59.824637 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-01-03 00:16:59.865963 | orchestrator | ok: [testbed-manager] 2026-01-03 00:16:59.866103 | orchestrator | 2026-01-03 00:16:59.866121 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-01-03 00:17:01.795525 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-01-03 00:17:01.795635 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-01-03 00:17:01.795650 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 
2026-01-03 00:17:01.795660 | orchestrator | 2026-01-03 00:17:01.795673 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-01-03 00:17:02.484661 | orchestrator | changed: [testbed-manager] 2026-01-03 00:17:02.484807 | orchestrator | 2026-01-03 00:17:02.484824 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-01-03 00:17:03.161151 | orchestrator | changed: [testbed-manager] 2026-01-03 00:17:03.161255 | orchestrator | 2026-01-03 00:17:03.161271 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-01-03 00:17:03.842499 | orchestrator | changed: [testbed-manager] 2026-01-03 00:17:03.842573 | orchestrator | 2026-01-03 00:17:03.842580 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-01-03 00:17:03.912868 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-01-03 00:17:03.912921 | orchestrator | 2026-01-03 00:17:03.912927 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-01-03 00:17:03.951546 | orchestrator | ok: [testbed-manager] 2026-01-03 00:17:03.951581 | orchestrator | 2026-01-03 00:17:03.951586 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-01-03 00:17:04.658962 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-01-03 00:17:04.659024 | orchestrator | 2026-01-03 00:17:04.659031 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-01-03 00:17:04.742672 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-01-03 00:17:04.742731 | orchestrator | 2026-01-03 00:17:04.742737 | orchestrator | 
TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-01-03 00:17:05.450433 | orchestrator | changed: [testbed-manager] 2026-01-03 00:17:05.450495 | orchestrator | 2026-01-03 00:17:05.450501 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-01-03 00:17:06.035829 | orchestrator | ok: [testbed-manager] 2026-01-03 00:17:06.035913 | orchestrator | 2026-01-03 00:17:06.035924 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-01-03 00:17:06.094308 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:17:06.094395 | orchestrator | 2026-01-03 00:17:06.094410 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-01-03 00:17:06.152146 | orchestrator | ok: [testbed-manager] 2026-01-03 00:17:06.152192 | orchestrator | 2026-01-03 00:17:06.152198 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-01-03 00:17:06.995027 | orchestrator | changed: [testbed-manager] 2026-01-03 00:17:06.995103 | orchestrator | 2026-01-03 00:17:06.995112 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-01-03 00:18:15.934502 | orchestrator | changed: [testbed-manager] 2026-01-03 00:18:15.934621 | orchestrator | 2026-01-03 00:18:15.934639 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-01-03 00:18:16.926241 | orchestrator | ok: [testbed-manager] 2026-01-03 00:18:16.926352 | orchestrator | 2026-01-03 00:18:16.926404 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-01-03 00:18:16.991263 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:18:16.991356 | orchestrator | 2026-01-03 00:18:16.991371 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 
2026-01-03 00:18:19.355857 | orchestrator | changed: [testbed-manager] 2026-01-03 00:18:19.355958 | orchestrator | 2026-01-03 00:18:19.355975 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-01-03 00:18:19.413411 | orchestrator | ok: [testbed-manager] 2026-01-03 00:18:19.413503 | orchestrator | 2026-01-03 00:18:19.413518 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-01-03 00:18:19.413532 | orchestrator | 2026-01-03 00:18:19.413544 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-01-03 00:18:19.470432 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:18:19.470524 | orchestrator | 2026-01-03 00:18:19.470540 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-01-03 00:19:19.533419 | orchestrator | Pausing for 60 seconds 2026-01-03 00:19:19.533549 | orchestrator | changed: [testbed-manager] 2026-01-03 00:19:19.533567 | orchestrator | 2026-01-03 00:19:19.534404 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-01-03 00:19:23.103147 | orchestrator | changed: [testbed-manager] 2026-01-03 00:19:23.103248 | orchestrator | 2026-01-03 00:19:23.103264 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-01-03 00:20:25.159187 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-01-03 00:20:25.159377 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-01-03 00:20:25.159395 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 
2026-01-03 00:20:25.159407 | orchestrator | changed: [testbed-manager] 2026-01-03 00:20:25.159420 | orchestrator | 2026-01-03 00:20:25.159432 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-01-03 00:20:35.309539 | orchestrator | changed: [testbed-manager] 2026-01-03 00:20:35.309641 | orchestrator | 2026-01-03 00:20:35.309656 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-01-03 00:20:35.400098 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-01-03 00:20:35.400164 | orchestrator | 2026-01-03 00:20:35.400170 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-01-03 00:20:35.400175 | orchestrator | 2026-01-03 00:20:35.400180 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-01-03 00:20:35.450116 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:20:35.450203 | orchestrator | 2026-01-03 00:20:35.450218 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-01-03 00:20:35.511436 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-01-03 00:20:35.511525 | orchestrator | 2026-01-03 00:20:35.511539 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-01-03 00:20:36.274407 | orchestrator | changed: [testbed-manager] 2026-01-03 00:20:36.274507 | orchestrator | 2026-01-03 00:20:36.274527 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-01-03 00:20:39.388670 | orchestrator | ok: [testbed-manager] 2026-01-03 00:20:39.388815 | orchestrator | 2026-01-03 00:20:39.388831 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-01-03 00:20:39.465360 | orchestrator | ok: [testbed-manager] => { 2026-01-03 00:20:39.465452 | orchestrator | "version_check_result.stdout_lines": [ 2026-01-03 00:20:39.465467 | orchestrator | "=== OSISM Container Version Check ===", 2026-01-03 00:20:39.465480 | orchestrator | "Checking running containers against expected versions...", 2026-01-03 00:20:39.465493 | orchestrator | "", 2026-01-03 00:20:39.465505 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-01-03 00:20:39.465517 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-01-03 00:20:39.465529 | orchestrator | " Enabled: true", 2026-01-03 00:20:39.465540 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-01-03 00:20:39.465552 | orchestrator | " Status: ✅ MATCH", 2026-01-03 00:20:39.465564 | orchestrator | "", 2026-01-03 00:20:39.465576 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-01-03 00:20:39.465589 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-01-03 00:20:39.465600 | orchestrator | " Enabled: true", 2026-01-03 00:20:39.465612 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-01-03 00:20:39.465623 | orchestrator | " Status: ✅ MATCH", 2026-01-03 00:20:39.465635 | orchestrator | "", 2026-01-03 00:20:39.465646 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-01-03 00:20:39.465658 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-01-03 00:20:39.465670 | orchestrator | " Enabled: true", 2026-01-03 00:20:39.465682 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-01-03 00:20:39.465693 | orchestrator | " Status: ✅ MATCH", 2026-01-03 00:20:39.465769 | orchestrator | "", 2026-01-03 00:20:39.465780 | 
orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-01-03 00:20:39.465816 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-01-03 00:20:39.465828 | orchestrator | " Enabled: true", 2026-01-03 00:20:39.465839 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-01-03 00:20:39.465849 | orchestrator | " Status: ✅ MATCH", 2026-01-03 00:20:39.465860 | orchestrator | "", 2026-01-03 00:20:39.465871 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-01-03 00:20:39.465881 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2025.1", 2026-01-03 00:20:39.465892 | orchestrator | " Enabled: true", 2026-01-03 00:20:39.465905 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2025.1", 2026-01-03 00:20:39.465918 | orchestrator | " Status: ✅ MATCH", 2026-01-03 00:20:39.465930 | orchestrator | "", 2026-01-03 00:20:39.465942 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-01-03 00:20:39.465955 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-03 00:20:39.465969 | orchestrator | " Enabled: true", 2026-01-03 00:20:39.465981 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-03 00:20:39.465993 | orchestrator | " Status: ✅ MATCH", 2026-01-03 00:20:39.466004 | orchestrator | "", 2026-01-03 00:20:39.466149 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-01-03 00:20:39.466164 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-01-03 00:20:39.466175 | orchestrator | " Enabled: true", 2026-01-03 00:20:39.466198 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-01-03 00:20:39.466209 | orchestrator | " Status: ✅ MATCH", 2026-01-03 00:20:39.466220 | orchestrator | "", 2026-01-03 00:20:39.466231 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-01-03 00:20:39.466241 | orchestrator | " 
Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-01-03 00:20:39.466257 | orchestrator | " Enabled: true", 2026-01-03 00:20:39.466269 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-01-03 00:20:39.466280 | orchestrator | " Status: ✅ MATCH", 2026-01-03 00:20:39.466290 | orchestrator | "", 2026-01-03 00:20:39.466301 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-01-03 00:20:39.466312 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-01-03 00:20:39.466322 | orchestrator | " Enabled: true", 2026-01-03 00:20:39.466333 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-01-03 00:20:39.466344 | orchestrator | " Status: ✅ MATCH", 2026-01-03 00:20:39.466354 | orchestrator | "", 2026-01-03 00:20:39.466365 | orchestrator | "Checking service: redis (Redis Cache)", 2026-01-03 00:20:39.466376 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-01-03 00:20:39.466386 | orchestrator | " Enabled: true", 2026-01-03 00:20:39.466397 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-01-03 00:20:39.466408 | orchestrator | " Status: ✅ MATCH", 2026-01-03 00:20:39.466418 | orchestrator | "", 2026-01-03 00:20:39.466429 | orchestrator | "Checking service: api (OSISM API Service)", 2026-01-03 00:20:39.466439 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-03 00:20:39.466450 | orchestrator | " Enabled: true", 2026-01-03 00:20:39.466461 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-03 00:20:39.466471 | orchestrator | " Status: ✅ MATCH", 2026-01-03 00:20:39.466482 | orchestrator | "", 2026-01-03 00:20:39.466493 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-01-03 00:20:39.466503 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-03 00:20:39.466514 | 
orchestrator | " Enabled: true", 2026-01-03 00:20:39.466525 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-03 00:20:39.466535 | orchestrator | " Status: ✅ MATCH", 2026-01-03 00:20:39.466546 | orchestrator | "", 2026-01-03 00:20:39.466557 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-01-03 00:20:39.466567 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-03 00:20:39.466588 | orchestrator | " Enabled: true", 2026-01-03 00:20:39.466598 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-03 00:20:39.466609 | orchestrator | " Status: ✅ MATCH", 2026-01-03 00:20:39.466620 | orchestrator | "", 2026-01-03 00:20:39.466630 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-01-03 00:20:39.466641 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-03 00:20:39.466652 | orchestrator | " Enabled: true", 2026-01-03 00:20:39.466662 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-03 00:20:39.466673 | orchestrator | " Status: ✅ MATCH", 2026-01-03 00:20:39.466684 | orchestrator | "", 2026-01-03 00:20:39.466719 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-01-03 00:20:39.466750 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-03 00:20:39.466761 | orchestrator | " Enabled: true", 2026-01-03 00:20:39.466772 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-03 00:20:39.466783 | orchestrator | " Status: ✅ MATCH", 2026-01-03 00:20:39.466794 | orchestrator | "", 2026-01-03 00:20:39.466804 | orchestrator | "=== Summary ===", 2026-01-03 00:20:39.466815 | orchestrator | "Errors (version mismatches): 0", 2026-01-03 00:20:39.466825 | orchestrator | "Warnings (expected containers not running): 0", 2026-01-03 00:20:39.466836 | orchestrator | "", 2026-01-03 00:20:39.466847 | orchestrator | "✅ All running containers match expected 
versions!" 2026-01-03 00:20:39.466858 | orchestrator | ] 2026-01-03 00:20:39.466869 | orchestrator | } 2026-01-03 00:20:39.466939 | orchestrator | 2026-01-03 00:20:39.466953 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-01-03 00:20:39.510222 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:20:39.510273 | orchestrator | 2026-01-03 00:20:39.510285 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:20:39.510298 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2026-01-03 00:20:39.510310 | orchestrator | 2026-01-03 00:20:39.611441 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-03 00:20:39.611516 | orchestrator | + deactivate 2026-01-03 00:20:39.611529 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-01-03 00:20:39.611543 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-03 00:20:39.611554 | orchestrator | + export PATH 2026-01-03 00:20:39.611565 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-01-03 00:20:39.611577 | orchestrator | + '[' -n '' ']' 2026-01-03 00:20:39.611588 | orchestrator | + hash -r 2026-01-03 00:20:39.611599 | orchestrator | + '[' -n '' ']' 2026-01-03 00:20:39.611611 | orchestrator | + unset VIRTUAL_ENV 2026-01-03 00:20:39.611622 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-01-03 00:20:39.611633 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-01-03 00:20:39.611644 | orchestrator | + unset -f deactivate 2026-01-03 00:20:39.611655 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-01-03 00:20:39.619803 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-03 00:20:39.619844 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-01-03 00:20:39.619855 | orchestrator | + local max_attempts=60 2026-01-03 00:20:39.619867 | orchestrator | + local name=ceph-ansible 2026-01-03 00:20:39.619878 | orchestrator | + local attempt_num=1 2026-01-03 00:20:39.620520 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-03 00:20:39.651019 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-03 00:20:39.651098 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-01-03 00:20:39.651113 | orchestrator | + local max_attempts=60 2026-01-03 00:20:39.651124 | orchestrator | + local name=kolla-ansible 2026-01-03 00:20:39.651134 | orchestrator | + local attempt_num=1 2026-01-03 00:20:39.651475 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-01-03 00:20:39.686513 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-03 00:20:39.686569 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-01-03 00:20:39.686583 | orchestrator | + local max_attempts=60 2026-01-03 00:20:39.686595 | orchestrator | + local name=osism-ansible 2026-01-03 00:20:39.686606 | orchestrator | + local attempt_num=1 2026-01-03 00:20:39.687494 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-01-03 00:20:39.728077 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-03 00:20:39.728178 | orchestrator | + [[ true == \t\r\u\e ]] 2026-01-03 00:20:39.728200 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-01-03 00:20:40.424661 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-01-03 00:20:40.610599 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-01-03 00:20:40.610738 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-01-03 00:20:40.610756 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2025.1 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-01-03 00:20:40.610769 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-01-03 00:20:40.610782 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-01-03 00:20:40.610794 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-01-03 00:20:40.610870 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-01-03 00:20:40.610886 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-01-03 00:20:40.610898 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-01-03 00:20:40.610909 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-01-03 00:20:40.611057 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2026-01-03 00:20:40.611072 | orchestrator | 
manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-01-03 00:20:40.611083 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-01-03 00:20:40.611094 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-01-03 00:20:40.611106 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-01-03 00:20:40.611117 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-01-03 00:20:40.616872 | orchestrator | ++ semver latest 7.0.0 2026-01-03 00:20:40.661290 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-03 00:20:40.661383 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-03 00:20:40.661426 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-01-03 00:20:40.666451 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-01-03 00:20:52.947357 | orchestrator | 2026-01-03 00:20:52 | INFO  | Task c8629b26-12f5-4fdf-b878-3c09bef623b3 (resolvconf) was prepared for execution. 2026-01-03 00:20:52.947459 | orchestrator | 2026-01-03 00:20:52 | INFO  | It takes a moment until task c8629b26-12f5-4fdf-b878-3c09bef623b3 (resolvconf) has been started and output is visible here. 
2026-01-03 00:21:06.582143 | orchestrator | 2026-01-03 00:21:06.582288 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-01-03 00:21:06.582319 | orchestrator | 2026-01-03 00:21:06.582340 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-03 00:21:06.582361 | orchestrator | Saturday 03 January 2026 00:20:57 +0000 (0:00:00.152) 0:00:00.152 ****** 2026-01-03 00:21:06.582381 | orchestrator | ok: [testbed-manager] 2026-01-03 00:21:06.582402 | orchestrator | 2026-01-03 00:21:06.582422 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-01-03 00:21:06.582444 | orchestrator | Saturday 03 January 2026 00:21:00 +0000 (0:00:03.675) 0:00:03.827 ****** 2026-01-03 00:21:06.582465 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:21:06.582483 | orchestrator | 2026-01-03 00:21:06.582502 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-01-03 00:21:06.582522 | orchestrator | Saturday 03 January 2026 00:21:00 +0000 (0:00:00.075) 0:00:03.902 ****** 2026-01-03 00:21:06.582543 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-01-03 00:21:06.582562 | orchestrator | 2026-01-03 00:21:06.582596 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-01-03 00:21:06.582619 | orchestrator | Saturday 03 January 2026 00:21:00 +0000 (0:00:00.083) 0:00:03.986 ****** 2026-01-03 00:21:06.582640 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-01-03 00:21:06.582657 | orchestrator | 2026-01-03 00:21:06.582671 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-01-03 00:21:06.582726 | orchestrator | Saturday 03 January 2026 00:21:00 +0000 (0:00:00.063) 0:00:04.049 ****** 2026-01-03 00:21:06.582742 | orchestrator | ok: [testbed-manager] 2026-01-03 00:21:06.582755 | orchestrator | 2026-01-03 00:21:06.582769 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-01-03 00:21:06.582782 | orchestrator | Saturday 03 January 2026 00:21:02 +0000 (0:00:01.054) 0:00:05.104 ****** 2026-01-03 00:21:06.582795 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:21:06.582808 | orchestrator | 2026-01-03 00:21:06.582821 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-01-03 00:21:06.582835 | orchestrator | Saturday 03 January 2026 00:21:02 +0000 (0:00:00.049) 0:00:05.153 ****** 2026-01-03 00:21:06.582848 | orchestrator | ok: [testbed-manager] 2026-01-03 00:21:06.582862 | orchestrator | 2026-01-03 00:21:06.582882 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-01-03 00:21:06.582903 | orchestrator | Saturday 03 January 2026 00:21:02 +0000 (0:00:00.502) 0:00:05.655 ****** 2026-01-03 00:21:06.582922 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:21:06.582944 | orchestrator | 2026-01-03 00:21:06.582963 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-01-03 00:21:06.582983 | orchestrator | Saturday 03 January 2026 00:21:02 +0000 (0:00:00.073) 0:00:05.729 ****** 2026-01-03 00:21:06.582995 | orchestrator | changed: [testbed-manager] 2026-01-03 00:21:06.583006 | orchestrator | 2026-01-03 00:21:06.583017 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-01-03 00:21:06.583028 | orchestrator | Saturday 03 January 2026 00:21:03 +0000 (0:00:00.530) 0:00:06.260 ****** 2026-01-03 00:21:06.583055 | orchestrator | changed: 
[testbed-manager] 2026-01-03 00:21:06.583088 | orchestrator | 2026-01-03 00:21:06.583099 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-01-03 00:21:06.583110 | orchestrator | Saturday 03 January 2026 00:21:04 +0000 (0:00:01.037) 0:00:07.298 ****** 2026-01-03 00:21:06.583121 | orchestrator | ok: [testbed-manager] 2026-01-03 00:21:06.583131 | orchestrator | 2026-01-03 00:21:06.583142 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-01-03 00:21:06.583153 | orchestrator | Saturday 03 January 2026 00:21:05 +0000 (0:00:00.942) 0:00:08.241 ****** 2026-01-03 00:21:06.583164 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-01-03 00:21:06.583175 | orchestrator | 2026-01-03 00:21:06.583187 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-01-03 00:21:06.583198 | orchestrator | Saturday 03 January 2026 00:21:05 +0000 (0:00:00.084) 0:00:08.325 ****** 2026-01-03 00:21:06.583208 | orchestrator | changed: [testbed-manager] 2026-01-03 00:21:06.583219 | orchestrator | 2026-01-03 00:21:06.583230 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:21:06.583241 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-03 00:21:06.583252 | orchestrator | 2026-01-03 00:21:06.583263 | orchestrator | 2026-01-03 00:21:06.583273 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:21:06.583284 | orchestrator | Saturday 03 January 2026 00:21:06 +0000 (0:00:01.128) 0:00:09.453 ****** 2026-01-03 00:21:06.583295 | orchestrator | =============================================================================== 2026-01-03 00:21:06.583305 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.68s 2026-01-03 00:21:06.583316 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.13s 2026-01-03 00:21:06.583326 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.05s 2026-01-03 00:21:06.583337 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.04s 2026-01-03 00:21:06.583348 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.94s 2026-01-03 00:21:06.583358 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.53s 2026-01-03 00:21:06.583389 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.50s 2026-01-03 00:21:06.583401 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-01-03 00:21:06.583412 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-01-03 00:21:06.583422 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.08s 2026-01-03 00:21:06.583440 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2026-01-03 00:21:06.583452 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.06s 2026-01-03 00:21:06.583462 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2026-01-03 00:21:06.850634 | orchestrator | + osism apply sshconfig 2026-01-03 00:21:18.997593 | orchestrator | 2026-01-03 00:21:18 | INFO  | Task d7a119ac-8861-4bb7-a57d-729b6dbb6efe (sshconfig) was prepared for execution. 
2026-01-03 00:21:18.997760 | orchestrator | 2026-01-03 00:21:18 | INFO  | It takes a moment until task d7a119ac-8861-4bb7-a57d-729b6dbb6efe (sshconfig) has been started and output is visible here. 2026-01-03 00:21:29.312419 | orchestrator | 2026-01-03 00:21:29.312535 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-01-03 00:21:29.312552 | orchestrator | 2026-01-03 00:21:29.312564 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-01-03 00:21:29.312576 | orchestrator | Saturday 03 January 2026 00:21:22 +0000 (0:00:00.117) 0:00:00.117 ****** 2026-01-03 00:21:29.312615 | orchestrator | ok: [testbed-manager] 2026-01-03 00:21:29.312628 | orchestrator | 2026-01-03 00:21:29.312640 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-01-03 00:21:29.312651 | orchestrator | Saturday 03 January 2026 00:21:23 +0000 (0:00:00.511) 0:00:00.629 ****** 2026-01-03 00:21:29.312662 | orchestrator | changed: [testbed-manager] 2026-01-03 00:21:29.312673 | orchestrator | 2026-01-03 00:21:29.312753 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-01-03 00:21:29.312764 | orchestrator | Saturday 03 January 2026 00:21:23 +0000 (0:00:00.434) 0:00:01.064 ****** 2026-01-03 00:21:29.312775 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-01-03 00:21:29.312786 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-01-03 00:21:29.312798 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-01-03 00:21:29.312808 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-01-03 00:21:29.312819 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-01-03 00:21:29.312830 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-01-03 00:21:29.312841 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-01-03 00:21:29.312852 | orchestrator | 2026-01-03 00:21:29.312863 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-01-03 00:21:29.312874 | orchestrator | Saturday 03 January 2026 00:21:28 +0000 (0:00:05.050) 0:00:06.114 ****** 2026-01-03 00:21:29.312884 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:21:29.312895 | orchestrator | 2026-01-03 00:21:29.312906 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-01-03 00:21:29.312918 | orchestrator | Saturday 03 January 2026 00:21:28 +0000 (0:00:00.066) 0:00:06.181 ****** 2026-01-03 00:21:29.312928 | orchestrator | changed: [testbed-manager] 2026-01-03 00:21:29.312939 | orchestrator | 2026-01-03 00:21:29.312950 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:21:29.312962 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:21:29.312973 | orchestrator | 2026-01-03 00:21:29.312984 | orchestrator | 2026-01-03 00:21:29.312995 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:21:29.313006 | orchestrator | Saturday 03 January 2026 00:21:29 +0000 (0:00:00.503) 0:00:06.685 ****** 2026-01-03 00:21:29.313017 | orchestrator | =============================================================================== 2026-01-03 00:21:29.313027 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.05s 2026-01-03 00:21:29.313038 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.51s 2026-01-03 00:21:29.313049 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.50s 2026-01-03 00:21:29.313060 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.43s 2026-01-03 00:21:29.313070 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2026-01-03 00:21:29.489464 | orchestrator | + osism apply known-hosts 2026-01-03 00:21:41.502984 | orchestrator | 2026-01-03 00:21:41 | INFO  | Task 29e43489-ea39-48c9-a543-ed8d4eebcc2d (known-hosts) was prepared for execution. 2026-01-03 00:21:41.503121 | orchestrator | 2026-01-03 00:21:41 | INFO  | It takes a moment until task 29e43489-ea39-48c9-a543-ed8d4eebcc2d (known-hosts) has been started and output is visible here. 2026-01-03 00:21:57.519849 | orchestrator | 2026-01-03 00:21:57.519960 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-01-03 00:21:57.519978 | orchestrator | 2026-01-03 00:21:57.519991 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-01-03 00:21:57.520004 | orchestrator | Saturday 03 January 2026 00:21:45 +0000 (0:00:00.117) 0:00:00.117 ****** 2026-01-03 00:21:57.520016 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-03 00:21:57.520047 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-03 00:21:57.520058 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-03 00:21:57.520069 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-03 00:21:57.520080 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-03 00:21:57.520091 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-03 00:21:57.520101 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-03 00:21:57.520112 | orchestrator | 2026-01-03 00:21:57.520124 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-01-03 00:21:57.520144 | orchestrator | Saturday 03 January 2026 00:21:50 +0000 (0:00:05.642) 0:00:05.759 ****** 2026-01-03 
00:21:57.520157 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-03 00:21:57.520170 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-03 00:21:57.520181 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-03 00:21:57.520192 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-03 00:21:57.520203 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-03 00:21:57.520214 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-03 00:21:57.520225 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-03 00:21:57.520236 | orchestrator | 2026-01-03 00:21:57.520247 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-03 00:21:57.520258 | orchestrator | Saturday 03 January 2026 00:21:51 +0000 (0:00:00.182) 0:00:05.942 ****** 2026-01-03 00:21:57.520269 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIDfS8sJSWdLKxSg/ZUomgM5r2jfh106yamN4qHUFMoWD) 2026-01-03 00:21:57.520284 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC32Yd6T0ubDAKfIq9HSAGUFtv7TnFyOdJGego7OjRa7430j+LgtxrzfCtwLacNDP7l/NJ+3GUKjEQx7tjjFXeCUE1HYtzVmM996DvuYBCoRlViCmZQa0I0Z5mfGZRudVfccJ08WKW4grWKvZXjLlATF/S3jPGQmJSxDvBVbAZyQ9lkzeyiJu5/b8QnXRFuMcNZ6nizyJhfcFAhOMj/K7Is0TjxmlZpX9Z6vSGS8I+GsFhw7B/JFd9AmPDeAkK3riBvfbiuXKkp4TA9WEOWGiVlW714FCBXPZgjMia+yYv4/Lxrru6EVV82Iy1FjotAKCzXDxvMPclbh2BmUJ8Us9oa0hh1VblR9lVnGE8j+cjHtTfvE9R/f1u+w0qhA5pW7spyViUR+e0rjghPx090aFK2G/p10WUOhtzUFSL+L3NZgA3cAw7UtYt/FpNnzKYOAJovgZFIN1CTrlu8lmYsnur1/fC4p2QNasXGq8v8MR+RAsOvOknlcw0C1qnXBlJOu3U=) 2026-01-03 00:21:57.520300 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCTU4xEEF+VO7s3VidCTdZdF3enviW5B8iPbQCmyMdLRjQuQyg61S/1hGD/L+OlNjxAUilhxD6nRnTkpgnZnxvU=) 2026-01-03 00:21:57.520312 | orchestrator | 2026-01-03 00:21:57.520323 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-03 00:21:57.520334 | orchestrator | Saturday 03 January 2026 00:21:52 +0000 (0:00:01.164) 0:00:07.107 ****** 2026-01-03 00:21:57.520370 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1Cp0OYTSMqIo0Q90KpZdB+gjoh1ehow0TEpHWMSl/8f6rSqca7GwipOpUVR793DmkN5vHG70qlE4E76C0qwwOwTUEEy/JhkMXeNEJh1U70gytdxm54iBZOGoaBRmr3Tg7eqfY/JStYqiX7einUWwhlP2JyNwwBCV2XBbSIGnauDrShn9lyaRzsZA0RTwaukpGa1EhMu3TAdra+5HpZAC4VAE4fSA3I9jhxeRlDvDXgZ5HI3vNQ10pLYfwf7lHFXgz3R4T97iiJ4vEUisITLAPMPcLPYK3qkACjed3jFaXmXsAKh0binoZ3FZpKSKEnO5P/p3+SRk+3P6ofaA5AVXN/dDLP5npuI1cL0Fi5x7eOG7f+FjY8EHWdeCjsbL+PNrTJ1KEjcrdRmCMdkiRrs56IChi4JropZkG8jRdMuuta3TfOqYthCuHuK1fQjM2f1bZVuSvVoyLSI9tmsf4seEgQolsc5o3luGGXqlvQyWdMwG7/45b354RhE/Wa7RTBPE=) 2026-01-03 00:21:57.520394 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN5PgQbD1JjMzfm1vhWgSx5kIn5lCEgVQMPbD9hiu7/8) 2026-01-03 00:21:57.520408 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDvh07/bIEy4mvLKWErJ1gzjpzGjQdxp0UEv8EDu95nkUWDD2muI4+ac2YXDARj4BMv374JMor6EkoBN3efiIdM=) 2026-01-03 00:21:57.520420 | orchestrator | 2026-01-03 00:21:57.520432 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-03 00:21:57.520446 | orchestrator | Saturday 03 January 2026 00:21:53 +0000 (0:00:01.112) 0:00:08.219 ****** 2026-01-03 00:21:57.520460 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAHp5WcvkoLhzAsP7vsLAwS362TYmZ1ByHGFPDWSLUng3BoMmB3ygegmPcEap2PoThHWWQWLzdYnpvn7FLgqMC8vgHTQIUjEwLlltyymJjqGeTEIvDhDrUC5cpoJ79ITBumFr8inFVZQGuMYDviqH4Cd7MbwDYxuZ+GV5BSkgmnppUXjVoSMoptNAmhnrSpTTFJby5waDONWmsDfj+WqNGe/zRdhU16SQ5tsRd1AB4nZyu5Cy9gf2KJDjs3Mk8YXyoR0HfU4f+EOdkIArSm9xa6n54XFw2P3OAIPGSo8uninWvtpZ31/qIY7kn+v2PApLvkjCIK3zxs8cz/b72Mx9yC7SuEtfK34+HaLhxunGykNH4s8CRcgddxxNoRdXFUErJMqbLigX4IZnASL8zWH6qYfv5xzzd7P1A6uAA1rR7bl/JTq2I7japMJheK2gMFCpHlCXFm8aFhNTupOqWLt/O0JH03f6tZQzXcqKMJPID7PzFr5R+sMJkrmnqwGuC0dc=) 2026-01-03 00:21:57.520532 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGG/JRYzoige3znfO0e8AQE+AFn6FdvXQkNPEs07VR7YH+Tis49wcIblr3D1nx6ACbuSzixBH4DIFSH/YZajauo=) 2026-01-03 00:21:57.520546 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFGU30hG9DNpPnTJzZC9fCgJ2cVtx1k2VcSNg0+9SOM7) 2026-01-03 00:21:57.520559 | orchestrator | 2026-01-03 00:21:57.520572 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-03 00:21:57.520584 | orchestrator | Saturday 03 January 2026 00:21:54 +0000 (0:00:01.046) 
0:00:09.266 ****** 2026-01-03 00:21:57.520686 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI353RLwBMWsf7Ex7iWWXeEoicbr0Pnrh217xEsvzPDLwS/T7QObTxqO68aQ6KgDvcHhi8EcSv6t++3xMxRPE0g=) 2026-01-03 00:21:57.520731 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBCoZM+M77AtnL67wQPMfushxM0mTx0kJC56YrHJmI7X) 2026-01-03 00:21:57.520751 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDTpijislOPAVuWZ9sw3W4pyyvAEh1XD+ur/kjniRRA2U2bhVDvSXKuuJyS5yXSAxm+EWYsGP+ZlRjc822OOQ6To7cpzAzVkmcXVWM5YJ88HygBjGRxaqwNHvN00MA5uh+vKmaajOGnIJ23vFOet5oXhHbhXr8J8a7fYeMKY3jK91WPfX15eAR0zCbDQfB5fIjloHJoXICQgIrBk1m0skDQCDDd/SReN9q3AtD5N5AUmVsf288U4APy3YRiwn7TIVHXTkdRDH8Qj9YM2HKicpASvJFW0Rodvrr69kmw8CTePjr3jvtRrHm22my0pIyDss0tex8Cno2RfnE3JnMQcRAoVC+UDfvlSz7OjBPBP+/xj+HOZxOF0PxKuopofZqlybqwcCGb3Taa+78OM4gGs6X5uk00G4NPsz4awklZi7CgoG4/xPs9PVZduKjf3HEDImo3QNEa+9CHC+ggJHc7b+A5pw6JZs9pk0/tmlwGfOF64WvUYdtOJIm/9Tq+7sfyZQs=) 2026-01-03 00:21:57.520772 | orchestrator | 2026-01-03 00:21:57.520792 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-03 00:21:57.520804 | orchestrator | Saturday 03 January 2026 00:21:55 +0000 (0:00:01.027) 0:00:10.293 ****** 2026-01-03 00:21:57.520815 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDlnTkfeWoLbGP3ETPPJyiYVqPlIuvPytn58OOSNDpx/AMTZ+iSVl2W0QlJr5bUKrHttEcreEp6zAQNOnitUq7o=) 2026-01-03 00:21:57.520827 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDKTfLyO5yPqEeV2qkk9V6vs9wHPUeNFZxKCvCY9rLwJ75x1VePImLHZB1Wd+qwSb+aPIBboQ0ui4cvcMyO1gTXMg6xvee+Qs4rXJJNOfqg7xgw+UH3Q6oXBUt272+4tIn1/YriSFscT3Ong+XEIH8SElbZy2IEd8ZzSYCdPOirz3Ik8V/21+aSVcXuN5G4Y46tx/SFeGq9IZ0juY3cai1LLfIJ2ZvnOCC2/7XSN00I9Auw6+1Qj84++7thfXf5SABiatvY99fJZmIIdpqT0uG3iivc+QmJDKcO+HYIkUzfUbHWaJ1c3cWGypcO8r89ASTwPGiPv4k80qSYqxyUe7ZJsJ2sII4i52tacIJXkJ17/Unod7ASznMR48gK3vWynh4yKCT7JgUugT4nxX5lSrgZsdB8Kak14zX1lBaRCnylAqpsfahJS8+RWUnwn74vgjSDx3MBdeadGqC1fQ7ZztBQRPj3mh2r9t5TFmqxO2m72M7EyB2tYzLA3i3q3/hDDMs=) 2026-01-03 00:21:57.520847 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICu3d0Xf9sytIqmenjiXMSjraN8IDCIpcdd1Xs0LfZp6) 2026-01-03 00:21:57.520858 | orchestrator | 2026-01-03 00:21:57.520869 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-03 00:21:57.520880 | orchestrator | Saturday 03 January 2026 00:21:56 +0000 (0:00:01.047) 0:00:11.340 ****** 2026-01-03 00:21:57.520901 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHdVkQ7ZMz0rs8MLSVelsQL7J/kz8xF9F/NY3tOfYUa6gD1FgYQgYuGH86e6sVylF5XTtHR08wtIzq95km/Lkx4=) 2026-01-03 00:22:08.171929 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDaqKes1/rGNTsp9c8oe9guQp62h4SxoB3bypnL3h+qFpf2XEJVoy8yEOFMAI4kyosJQySEFFxw6tU78jSWx9SbybySG4ImIvlkoUnHCom4DeheGSlfmpWAjMRt57cR0FWNsVfCocy6cbKI1emeZt5bLsVa2NNN32P+HBX98xKE1KFZwg1Npa8ifGP8wg3x1cLtb92uKGRbjIn/nXLs7wwfJ/fnWZiecGXj9Tx1UKvvnPB4PjJniiWlvpEPtssOuSi5h0MynSb6Ch+PqKYcFrr9DjElg3Xa6BNwCkaKxC4m3J/Gb/h7w8NCqedKN2A6bmICAtygSILBBd+xpdOvulHSZPd72RODBYFq+qEH65wk87QjJqNWPY1EMROouDNVXWd7e+IWfyTfI155VE0EOwQl4kHsf6BynBm+piuVLrzUlX071mRmsvsdBCgxMCn4WTyRkfgjNK5m6S8UnMgM86psBeLk/y4Rbkg00a5F7+XdZgiik/sOHXl2aOwyWqXS/eM=) 2026-01-03 00:22:08.172024 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGKoubHZS/jJec/s92w22Xru70UCqeL5bFF7IW1PbnjM) 2026-01-03 00:22:08.172037 | orchestrator | 2026-01-03 00:22:08.172046 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-03 00:22:08.172054 | orchestrator | Saturday 03 January 2026 00:21:57 +0000 (0:00:01.029) 0:00:12.370 ****** 2026-01-03 00:22:08.172061 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDArI09OA+za8RAgFXhLjV/12KlT0pc1LsoF514T2vocde4skkTNs/iaaSRdUimaKTXHVffTgMY5WQdRLXfaL5Izl2alRGusC+iGs69tr28wVMGaFdHKsbgNUKjxktAHcRlyAxclmvcnejgERvP8ZOoE/U3/sSbyszpAICfWrWQ5spk4XB8PNBS1E5/0IhPGKPUvK87z6vOpT2mCz/XKaCvjvDPAli5pSRnnMi3LctbYC+dQ6FbmHPon8dTIZSCniGNhhkmpmWvl0G3dHgpOBXDNpUKLijGrnr/mhStVO+8EW6WLeQrBhnzuOZEmndi51pFlTNt1ZXGc2btYElYOfy3LPpM8re7O0+335U/UajOtUyNoxiKEypWLYl5Xe/DF74ozEcLNdYEm6x/CPbDvGIrXiVQdXpleP3QRrgwcOzHbH/9lpbjtlw7E0MhC3DOCpKMbQSR+ofTwslFNKShSfZyKfv6zMamK+MYPiPNUzWtnB/7h/vQo0Lr6Tcp9o8ZSok=) 2026-01-03 00:22:08.172070 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHkN51zaTDSFMsb/YKXfGUD3zjkxtIIhdsuoYGaiMXbQ/nxNUJ73uxdc4uwrJXg6FfAc8P2G0cgCSCae2CNNkTc=) 2026-01-03 00:22:08.172078 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICqRVKlE1RMUBRQSWDHdNoo8BH4uA2CYB8Kwt26xDO96) 2026-01-03 00:22:08.172085 | orchestrator | 2026-01-03 00:22:08.172093 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-01-03 00:22:08.172100 | orchestrator | Saturday 03 January 2026 00:21:58 +0000 (0:00:01.009) 0:00:13.379 ****** 2026-01-03 00:22:08.172108 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-03 00:22:08.172116 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-03 00:22:08.172123 | orchestrator | ok: 
[testbed-manager] => (item=testbed-node-4) 2026-01-03 00:22:08.172145 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-03 00:22:08.172152 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-03 00:22:08.172174 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-03 00:22:08.172181 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-03 00:22:08.172188 | orchestrator | 2026-01-03 00:22:08.172195 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-01-03 00:22:08.172202 | orchestrator | Saturday 03 January 2026 00:22:03 +0000 (0:00:05.266) 0:00:18.645 ****** 2026-01-03 00:22:08.172210 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-03 00:22:08.172218 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-03 00:22:08.172225 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-03 00:22:08.172232 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-03 00:22:08.172239 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-03 00:22:08.172245 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-03 00:22:08.172252 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-03 00:22:08.172259 | orchestrator | 2026-01-03 00:22:08.172277 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-03 00:22:08.172284 | orchestrator | Saturday 03 January 2026 00:22:03 +0000 (0:00:00.159) 0:00:18.805 ****** 2026-01-03 00:22:08.172291 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDfS8sJSWdLKxSg/ZUomgM5r2jfh106yamN4qHUFMoWD) 2026-01-03 00:22:08.172299 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC32Yd6T0ubDAKfIq9HSAGUFtv7TnFyOdJGego7OjRa7430j+LgtxrzfCtwLacNDP7l/NJ+3GUKjEQx7tjjFXeCUE1HYtzVmM996DvuYBCoRlViCmZQa0I0Z5mfGZRudVfccJ08WKW4grWKvZXjLlATF/S3jPGQmJSxDvBVbAZyQ9lkzeyiJu5/b8QnXRFuMcNZ6nizyJhfcFAhOMj/K7Is0TjxmlZpX9Z6vSGS8I+GsFhw7B/JFd9AmPDeAkK3riBvfbiuXKkp4TA9WEOWGiVlW714FCBXPZgjMia+yYv4/Lxrru6EVV82Iy1FjotAKCzXDxvMPclbh2BmUJ8Us9oa0hh1VblR9lVnGE8j+cjHtTfvE9R/f1u+w0qhA5pW7spyViUR+e0rjghPx090aFK2G/p10WUOhtzUFSL+L3NZgA3cAw7UtYt/FpNnzKYOAJovgZFIN1CTrlu8lmYsnur1/fC4p2QNasXGq8v8MR+RAsOvOknlcw0C1qnXBlJOu3U=) 2026-01-03 00:22:08.172306 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCTU4xEEF+VO7s3VidCTdZdF3enviW5B8iPbQCmyMdLRjQuQyg61S/1hGD/L+OlNjxAUilhxD6nRnTkpgnZnxvU=) 2026-01-03 00:22:08.172313 | orchestrator | 2026-01-03 00:22:08.172320 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-03 00:22:08.172327 | orchestrator | Saturday 03 January 2026 
00:22:05 +0000 (0:00:01.080) 0:00:19.885 ****** 2026-01-03 00:22:08.172339 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1Cp0OYTSMqIo0Q90KpZdB+gjoh1ehow0TEpHWMSl/8f6rSqca7GwipOpUVR793DmkN5vHG70qlE4E76C0qwwOwTUEEy/JhkMXeNEJh1U70gytdxm54iBZOGoaBRmr3Tg7eqfY/JStYqiX7einUWwhlP2JyNwwBCV2XBbSIGnauDrShn9lyaRzsZA0RTwaukpGa1EhMu3TAdra+5HpZAC4VAE4fSA3I9jhxeRlDvDXgZ5HI3vNQ10pLYfwf7lHFXgz3R4T97iiJ4vEUisITLAPMPcLPYK3qkACjed3jFaXmXsAKh0binoZ3FZpKSKEnO5P/p3+SRk+3P6ofaA5AVXN/dDLP5npuI1cL0Fi5x7eOG7f+FjY8EHWdeCjsbL+PNrTJ1KEjcrdRmCMdkiRrs56IChi4JropZkG8jRdMuuta3TfOqYthCuHuK1fQjM2f1bZVuSvVoyLSI9tmsf4seEgQolsc5o3luGGXqlvQyWdMwG7/45b354RhE/Wa7RTBPE=) 2026-01-03 00:22:08.172357 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN5PgQbD1JjMzfm1vhWgSx5kIn5lCEgVQMPbD9hiu7/8) 2026-01-03 00:22:08.172371 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDvh07/bIEy4mvLKWErJ1gzjpzGjQdxp0UEv8EDu95nkUWDD2muI4+ac2YXDARj4BMv374JMor6EkoBN3efiIdM=) 2026-01-03 00:22:08.172387 | orchestrator | 2026-01-03 00:22:08.172401 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-03 00:22:08.172411 | orchestrator | Saturday 03 January 2026 00:22:06 +0000 (0:00:01.050) 0:00:20.936 ****** 2026-01-03 00:22:08.172421 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFGU30hG9DNpPnTJzZC9fCgJ2cVtx1k2VcSNg0+9SOM7) 2026-01-03 00:22:08.172431 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDAHp5WcvkoLhzAsP7vsLAwS362TYmZ1ByHGFPDWSLUng3BoMmB3ygegmPcEap2PoThHWWQWLzdYnpvn7FLgqMC8vgHTQIUjEwLlltyymJjqGeTEIvDhDrUC5cpoJ79ITBumFr8inFVZQGuMYDviqH4Cd7MbwDYxuZ+GV5BSkgmnppUXjVoSMoptNAmhnrSpTTFJby5waDONWmsDfj+WqNGe/zRdhU16SQ5tsRd1AB4nZyu5Cy9gf2KJDjs3Mk8YXyoR0HfU4f+EOdkIArSm9xa6n54XFw2P3OAIPGSo8uninWvtpZ31/qIY7kn+v2PApLvkjCIK3zxs8cz/b72Mx9yC7SuEtfK34+HaLhxunGykNH4s8CRcgddxxNoRdXFUErJMqbLigX4IZnASL8zWH6qYfv5xzzd7P1A6uAA1rR7bl/JTq2I7japMJheK2gMFCpHlCXFm8aFhNTupOqWLt/O0JH03f6tZQzXcqKMJPID7PzFr5R+sMJkrmnqwGuC0dc=) 2026-01-03 00:22:08.172443 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGG/JRYzoige3znfO0e8AQE+AFn6FdvXQkNPEs07VR7YH+Tis49wcIblr3D1nx6ACbuSzixBH4DIFSH/YZajauo=) 2026-01-03 00:22:08.172453 | orchestrator | 2026-01-03 00:22:08.172463 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-03 00:22:08.172473 | orchestrator | Saturday 03 January 2026 00:22:07 +0000 (0:00:01.047) 0:00:21.983 ****** 2026-01-03 00:22:08.172483 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI353RLwBMWsf7Ex7iWWXeEoicbr0Pnrh217xEsvzPDLwS/T7QObTxqO68aQ6KgDvcHhi8EcSv6t++3xMxRPE0g=) 2026-01-03 00:22:08.172518 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDTpijislOPAVuWZ9sw3W4pyyvAEh1XD+ur/kjniRRA2U2bhVDvSXKuuJyS5yXSAxm+EWYsGP+ZlRjc822OOQ6To7cpzAzVkmcXVWM5YJ88HygBjGRxaqwNHvN00MA5uh+vKmaajOGnIJ23vFOet5oXhHbhXr8J8a7fYeMKY3jK91WPfX15eAR0zCbDQfB5fIjloHJoXICQgIrBk1m0skDQCDDd/SReN9q3AtD5N5AUmVsf288U4APy3YRiwn7TIVHXTkdRDH8Qj9YM2HKicpASvJFW0Rodvrr69kmw8CTePjr3jvtRrHm22my0pIyDss0tex8Cno2RfnE3JnMQcRAoVC+UDfvlSz7OjBPBP+/xj+HOZxOF0PxKuopofZqlybqwcCGb3Taa+78OM4gGs6X5uk00G4NPsz4awklZi7CgoG4/xPs9PVZduKjf3HEDImo3QNEa+9CHC+ggJHc7b+A5pw6JZs9pk0/tmlwGfOF64WvUYdtOJIm/9Tq+7sfyZQs=) 
2026-01-03 00:22:12.460526 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBCoZM+M77AtnL67wQPMfushxM0mTx0kJC56YrHJmI7X) 2026-01-03 00:22:12.460632 | orchestrator | 2026-01-03 00:22:12.460649 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-03 00:22:12.460662 | orchestrator | Saturday 03 January 2026 00:22:08 +0000 (0:00:01.039) 0:00:23.022 ****** 2026-01-03 00:22:12.460675 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDlnTkfeWoLbGP3ETPPJyiYVqPlIuvPytn58OOSNDpx/AMTZ+iSVl2W0QlJr5bUKrHttEcreEp6zAQNOnitUq7o=) 2026-01-03 00:22:12.460708 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKTfLyO5yPqEeV2qkk9V6vs9wHPUeNFZxKCvCY9rLwJ75x1VePImLHZB1Wd+qwSb+aPIBboQ0ui4cvcMyO1gTXMg6xvee+Qs4rXJJNOfqg7xgw+UH3Q6oXBUt272+4tIn1/YriSFscT3Ong+XEIH8SElbZy2IEd8ZzSYCdPOirz3Ik8V/21+aSVcXuN5G4Y46tx/SFeGq9IZ0juY3cai1LLfIJ2ZvnOCC2/7XSN00I9Auw6+1Qj84++7thfXf5SABiatvY99fJZmIIdpqT0uG3iivc+QmJDKcO+HYIkUzfUbHWaJ1c3cWGypcO8r89ASTwPGiPv4k80qSYqxyUe7ZJsJ2sII4i52tacIJXkJ17/Unod7ASznMR48gK3vWynh4yKCT7JgUugT4nxX5lSrgZsdB8Kak14zX1lBaRCnylAqpsfahJS8+RWUnwn74vgjSDx3MBdeadGqC1fQ7ZztBQRPj3mh2r9t5TFmqxO2m72M7EyB2tYzLA3i3q3/hDDMs=) 2026-01-03 00:22:12.460788 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICu3d0Xf9sytIqmenjiXMSjraN8IDCIpcdd1Xs0LfZp6) 2026-01-03 00:22:12.460801 | orchestrator | 2026-01-03 00:22:12.460813 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-03 00:22:12.460824 | orchestrator | Saturday 03 January 2026 00:22:09 +0000 (0:00:01.063) 0:00:24.085 ****** 2026-01-03 00:22:12.460835 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIGKoubHZS/jJec/s92w22Xru70UCqeL5bFF7IW1PbnjM) 2026-01-03 00:22:12.460847 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDaqKes1/rGNTsp9c8oe9guQp62h4SxoB3bypnL3h+qFpf2XEJVoy8yEOFMAI4kyosJQySEFFxw6tU78jSWx9SbybySG4ImIvlkoUnHCom4DeheGSlfmpWAjMRt57cR0FWNsVfCocy6cbKI1emeZt5bLsVa2NNN32P+HBX98xKE1KFZwg1Npa8ifGP8wg3x1cLtb92uKGRbjIn/nXLs7wwfJ/fnWZiecGXj9Tx1UKvvnPB4PjJniiWlvpEPtssOuSi5h0MynSb6Ch+PqKYcFrr9DjElg3Xa6BNwCkaKxC4m3J/Gb/h7w8NCqedKN2A6bmICAtygSILBBd+xpdOvulHSZPd72RODBYFq+qEH65wk87QjJqNWPY1EMROouDNVXWd7e+IWfyTfI155VE0EOwQl4kHsf6BynBm+piuVLrzUlX071mRmsvsdBCgxMCn4WTyRkfgjNK5m6S8UnMgM86psBeLk/y4Rbkg00a5F7+XdZgiik/sOHXl2aOwyWqXS/eM=) 2026-01-03 00:22:12.460859 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHdVkQ7ZMz0rs8MLSVelsQL7J/kz8xF9F/NY3tOfYUa6gD1FgYQgYuGH86e6sVylF5XTtHR08wtIzq95km/Lkx4=) 2026-01-03 00:22:12.460870 | orchestrator | 2026-01-03 00:22:12.460881 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-03 00:22:12.460892 | orchestrator | Saturday 03 January 2026 00:22:10 +0000 (0:00:01.040) 0:00:25.126 ****** 2026-01-03 00:22:12.460903 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDArI09OA+za8RAgFXhLjV/12KlT0pc1LsoF514T2vocde4skkTNs/iaaSRdUimaKTXHVffTgMY5WQdRLXfaL5Izl2alRGusC+iGs69tr28wVMGaFdHKsbgNUKjxktAHcRlyAxclmvcnejgERvP8ZOoE/U3/sSbyszpAICfWrWQ5spk4XB8PNBS1E5/0IhPGKPUvK87z6vOpT2mCz/XKaCvjvDPAli5pSRnnMi3LctbYC+dQ6FbmHPon8dTIZSCniGNhhkmpmWvl0G3dHgpOBXDNpUKLijGrnr/mhStVO+8EW6WLeQrBhnzuOZEmndi51pFlTNt1ZXGc2btYElYOfy3LPpM8re7O0+335U/UajOtUyNoxiKEypWLYl5Xe/DF74ozEcLNdYEm6x/CPbDvGIrXiVQdXpleP3QRrgwcOzHbH/9lpbjtlw7E0MhC3DOCpKMbQSR+ofTwslFNKShSfZyKfv6zMamK+MYPiPNUzWtnB/7h/vQo0Lr6Tcp9o8ZSok=) 2026-01-03 00:22:12.460915 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHkN51zaTDSFMsb/YKXfGUD3zjkxtIIhdsuoYGaiMXbQ/nxNUJ73uxdc4uwrJXg6FfAc8P2G0cgCSCae2CNNkTc=) 2026-01-03 00:22:12.460926 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICqRVKlE1RMUBRQSWDHdNoo8BH4uA2CYB8Kwt26xDO96) 2026-01-03 00:22:12.460937 | orchestrator | 2026-01-03 00:22:12.460949 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-01-03 00:22:12.460960 | orchestrator | Saturday 03 January 2026 00:22:11 +0000 (0:00:01.050) 0:00:26.177 ****** 2026-01-03 00:22:12.460971 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-01-03 00:22:12.460983 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-01-03 00:22:12.460993 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-01-03 00:22:12.461004 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-01-03 00:22:12.461015 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-03 00:22:12.461043 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-01-03 00:22:12.461055 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-01-03 00:22:12.461068 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:22:12.461081 | orchestrator | 2026-01-03 00:22:12.461093 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-01-03 00:22:12.461113 | orchestrator | Saturday 03 January 2026 00:22:11 +0000 (0:00:00.158) 0:00:26.335 ****** 2026-01-03 00:22:12.461126 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:22:12.461139 | orchestrator | 2026-01-03 00:22:12.461152 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-01-03 00:22:12.461164 | orchestrator | Saturday 03 January 2026 
00:22:11 +0000 (0:00:00.052) 0:00:26.388 ****** 2026-01-03 00:22:12.461177 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:22:12.461190 | orchestrator | 2026-01-03 00:22:12.461203 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-01-03 00:22:12.461215 | orchestrator | Saturday 03 January 2026 00:22:11 +0000 (0:00:00.045) 0:00:26.433 ****** 2026-01-03 00:22:12.461227 | orchestrator | changed: [testbed-manager] 2026-01-03 00:22:12.461239 | orchestrator | 2026-01-03 00:22:12.461252 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:22:12.461265 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-03 00:22:12.461278 | orchestrator | 2026-01-03 00:22:12.461290 | orchestrator | 2026-01-03 00:22:12.461304 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:22:12.461317 | orchestrator | Saturday 03 January 2026 00:22:12 +0000 (0:00:00.700) 0:00:27.134 ****** 2026-01-03 00:22:12.461329 | orchestrator | =============================================================================== 2026-01-03 00:22:12.461342 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.64s 2026-01-03 00:22:12.461354 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.27s 2026-01-03 00:22:12.461367 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-01-03 00:22:12.461379 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-01-03 00:22:12.461392 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-01-03 00:22:12.461404 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-01-03 
00:22:12.461416 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-01-03 00:22:12.461427 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-01-03 00:22:12.461438 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-01-03 00:22:12.461449 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-01-03 00:22:12.461460 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-01-03 00:22:12.461470 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-01-03 00:22:12.461481 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-01-03 00:22:12.461492 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-01-03 00:22:12.461510 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-01-03 00:22:12.461522 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-01-03 00:22:12.461533 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.70s 2026-01-03 00:22:12.461544 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2026-01-03 00:22:12.461555 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2026-01-03 00:22:12.461566 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2026-01-03 00:22:12.812141 | orchestrator | + osism apply squid 2026-01-03 00:22:24.862283 | orchestrator | 2026-01-03 00:22:24 | INFO  | Task c4325327-c510-410d-9952-e4dde0a27a9f (squid) was prepared for execution. 
2026-01-03 00:22:24.862395 | orchestrator | 2026-01-03 00:22:24 | INFO  | It takes a moment until task c4325327-c510-410d-9952-e4dde0a27a9f (squid) has been started and output is visible here. 2026-01-03 00:24:21.869825 | orchestrator | 2026-01-03 00:24:21.869980 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-01-03 00:24:21.870004 | orchestrator | 2026-01-03 00:24:21.870092 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-01-03 00:24:21.870111 | orchestrator | Saturday 03 January 2026 00:22:28 +0000 (0:00:00.158) 0:00:00.158 ****** 2026-01-03 00:24:21.870128 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-01-03 00:24:21.870146 | orchestrator | 2026-01-03 00:24:21.870163 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-01-03 00:24:21.870180 | orchestrator | Saturday 03 January 2026 00:22:29 +0000 (0:00:00.081) 0:00:00.240 ****** 2026-01-03 00:24:21.870197 | orchestrator | ok: [testbed-manager] 2026-01-03 00:24:21.870215 | orchestrator | 2026-01-03 00:24:21.870232 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-01-03 00:24:21.870248 | orchestrator | Saturday 03 January 2026 00:22:30 +0000 (0:00:01.524) 0:00:01.764 ****** 2026-01-03 00:24:21.870265 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-01-03 00:24:21.870282 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-01-03 00:24:21.870298 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-01-03 00:24:21.870315 | orchestrator | 2026-01-03 00:24:21.870330 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-01-03 00:24:21.870347 | orchestrator | Saturday 
03 January 2026 00:22:31 +0000 (0:00:01.157) 0:00:02.922 ****** 2026-01-03 00:24:21.870365 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-01-03 00:24:21.870382 | orchestrator | 2026-01-03 00:24:21.870400 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-01-03 00:24:21.870417 | orchestrator | Saturday 03 January 2026 00:22:32 +0000 (0:00:01.035) 0:00:03.958 ****** 2026-01-03 00:24:21.870433 | orchestrator | ok: [testbed-manager] 2026-01-03 00:24:21.870449 | orchestrator | 2026-01-03 00:24:21.870466 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-01-03 00:24:21.870483 | orchestrator | Saturday 03 January 2026 00:22:33 +0000 (0:00:00.352) 0:00:04.310 ****** 2026-01-03 00:24:21.870500 | orchestrator | changed: [testbed-manager] 2026-01-03 00:24:21.870517 | orchestrator | 2026-01-03 00:24:21.870535 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-01-03 00:24:21.870573 | orchestrator | Saturday 03 January 2026 00:22:34 +0000 (0:00:00.929) 0:00:05.240 ****** 2026-01-03 00:24:21.870591 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-01-03 00:24:21.870608 | orchestrator | ok: [testbed-manager] 2026-01-03 00:24:21.870625 | orchestrator | 2026-01-03 00:24:21.870642 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-01-03 00:24:21.870658 | orchestrator | Saturday 03 January 2026 00:23:05 +0000 (0:00:31.093) 0:00:36.333 ****** 2026-01-03 00:24:21.870676 | orchestrator | changed: [testbed-manager] 2026-01-03 00:24:21.870691 | orchestrator | 2026-01-03 00:24:21.870708 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-01-03 00:24:21.870726 | orchestrator | Saturday 03 January 2026 00:23:20 +0000 (0:00:15.746) 0:00:52.080 ****** 2026-01-03 00:24:21.870743 | orchestrator | Pausing for 60 seconds 2026-01-03 00:24:21.870759 | orchestrator | changed: [testbed-manager] 2026-01-03 00:24:21.870776 | orchestrator | 2026-01-03 00:24:21.870792 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-01-03 00:24:21.870808 | orchestrator | Saturday 03 January 2026 00:24:20 +0000 (0:01:00.090) 0:01:52.170 ****** 2026-01-03 00:24:21.870826 | orchestrator | ok: [testbed-manager] 2026-01-03 00:24:21.870844 | orchestrator | 2026-01-03 00:24:21.870862 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-01-03 00:24:21.870910 | orchestrator | Saturday 03 January 2026 00:24:21 +0000 (0:00:00.067) 0:01:52.238 ****** 2026-01-03 00:24:21.870953 | orchestrator | changed: [testbed-manager] 2026-01-03 00:24:21.870970 | orchestrator | 2026-01-03 00:24:21.870987 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:24:21.871005 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:24:21.871023 | orchestrator | 2026-01-03 00:24:21.871038 | orchestrator | 2026-01-03 00:24:21.871056 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-01-03 00:24:21.871072 | orchestrator | Saturday 03 January 2026 00:24:21 +0000 (0:00:00.626) 0:01:52.864 ****** 2026-01-03 00:24:21.871088 | orchestrator | =============================================================================== 2026-01-03 00:24:21.871104 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2026-01-03 00:24:21.871120 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.09s 2026-01-03 00:24:21.871136 | orchestrator | osism.services.squid : Restart squid service --------------------------- 15.75s 2026-01-03 00:24:21.871153 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.52s 2026-01-03 00:24:21.871170 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.16s 2026-01-03 00:24:21.871186 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.04s 2026-01-03 00:24:21.871204 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.93s 2026-01-03 00:24:21.871221 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.63s 2026-01-03 00:24:21.871237 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s 2026-01-03 00:24:21.871252 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2026-01-03 00:24:21.871267 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-01-03 00:24:22.213358 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-03 00:24:22.213506 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-01-03 00:24:22.218763 | orchestrator | + set -e 2026-01-03 00:24:22.219126 | orchestrator | + NAMESPACE=kolla 2026-01-03 
00:24:22.219170 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-01-03 00:24:22.222954 | orchestrator | ++ semver latest 9.0.0 2026-01-03 00:24:22.266342 | orchestrator | + [[ -1 -lt 0 ]] 2026-01-03 00:24:22.266424 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-03 00:24:22.266762 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-01-03 00:24:34.422173 | orchestrator | 2026-01-03 00:24:34 | INFO  | Task cca3dbd9-9811-4d2b-b9d5-a2af962c29f0 (operator) was prepared for execution. 2026-01-03 00:24:34.422281 | orchestrator | 2026-01-03 00:24:34 | INFO  | It takes a moment until task cca3dbd9-9811-4d2b-b9d5-a2af962c29f0 (operator) has been started and output is visible here. 2026-01-03 00:24:50.682915 | orchestrator | 2026-01-03 00:24:50.683050 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-01-03 00:24:50.683062 | orchestrator | 2026-01-03 00:24:50.683071 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-03 00:24:50.683079 | orchestrator | Saturday 03 January 2026 00:24:38 +0000 (0:00:00.103) 0:00:00.103 ****** 2026-01-03 00:24:50.683088 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:24:50.683096 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:24:50.683104 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:24:50.683111 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:24:50.683118 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:24:50.683129 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:24:50.683136 | orchestrator | 2026-01-03 00:24:50.683144 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-01-03 00:24:50.683151 | orchestrator | Saturday 03 January 2026 00:24:42 +0000 (0:00:04.277) 0:00:04.381 ****** 2026-01-03 00:24:50.683178 | orchestrator | ok: [testbed-node-5] 
2026-01-03 00:24:50.683186 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:24:50.683193 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:24:50.683200 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:24:50.683208 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:24:50.683215 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:24:50.683222 | orchestrator | 2026-01-03 00:24:50.683230 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-01-03 00:24:50.683237 | orchestrator | 2026-01-03 00:24:50.683245 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-03 00:24:50.683252 | orchestrator | Saturday 03 January 2026 00:24:43 +0000 (0:00:00.735) 0:00:05.116 ****** 2026-01-03 00:24:50.683259 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:24:50.683267 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:24:50.683274 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:24:50.683282 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:24:50.683289 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:24:50.683296 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:24:50.683303 | orchestrator | 2026-01-03 00:24:50.683311 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-03 00:24:50.683318 | orchestrator | Saturday 03 January 2026 00:24:43 +0000 (0:00:00.153) 0:00:05.269 ****** 2026-01-03 00:24:50.683326 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:24:50.683333 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:24:50.683340 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:24:50.683347 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:24:50.683354 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:24:50.683361 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:24:50.683369 | orchestrator | 2026-01-03 00:24:50.683376 | orchestrator | TASK [osism.commons.operator : Create operator group] 
************************** 2026-01-03 00:24:50.683383 | orchestrator | Saturday 03 January 2026 00:24:43 +0000 (0:00:00.133) 0:00:05.403 ****** 2026-01-03 00:24:50.683391 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:24:50.683413 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:24:50.683420 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:24:50.683428 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:24:50.683435 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:24:50.683443 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:24:50.683450 | orchestrator | 2026-01-03 00:24:50.683457 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-01-03 00:24:50.683464 | orchestrator | Saturday 03 January 2026 00:24:44 +0000 (0:00:00.629) 0:00:06.033 ****** 2026-01-03 00:24:50.683472 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:24:50.683479 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:24:50.683488 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:24:50.683497 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:24:50.683505 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:24:50.683514 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:24:50.683522 | orchestrator | 2026-01-03 00:24:50.683531 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-01-03 00:24:50.683539 | orchestrator | Saturday 03 January 2026 00:24:44 +0000 (0:00:00.792) 0:00:06.825 ****** 2026-01-03 00:24:50.683548 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-01-03 00:24:50.683558 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-01-03 00:24:50.683567 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-01-03 00:24:50.683580 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-01-03 00:24:50.683592 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-01-03 
00:24:50.683604 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-01-03 00:24:50.683616 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-01-03 00:24:50.683629 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-01-03 00:24:50.683640 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-01-03 00:24:50.683653 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-01-03 00:24:50.683675 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-01-03 00:24:50.683687 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-01-03 00:24:50.683699 | orchestrator | 2026-01-03 00:24:50.683711 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-01-03 00:24:50.683723 | orchestrator | Saturday 03 January 2026 00:24:46 +0000 (0:00:01.187) 0:00:08.013 ****** 2026-01-03 00:24:50.683735 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:24:50.683748 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:24:50.683760 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:24:50.683773 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:24:50.683785 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:24:50.683799 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:24:50.683812 | orchestrator | 2026-01-03 00:24:50.683825 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-01-03 00:24:50.683839 | orchestrator | Saturday 03 January 2026 00:24:47 +0000 (0:00:01.197) 0:00:09.210 ****** 2026-01-03 00:24:50.683852 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-01-03 00:24:50.683864 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-01-03 00:24:50.683876 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-01-03 00:24:50.683887 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-01-03 00:24:50.683919 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-01-03 00:24:50.683932 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-01-03 00:24:50.683944 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-01-03 00:24:50.683956 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-01-03 00:24:50.684098 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-01-03 00:24:50.684121 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-01-03 00:24:50.684128 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-01-03 00:24:50.684136 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-01-03 00:24:50.684143 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-01-03 00:24:50.684150 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-01-03 00:24:50.684157 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-01-03 00:24:50.684164 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-01-03 00:24:50.684171 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-01-03 00:24:50.684178 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-01-03 00:24:50.684185 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-01-03 00:24:50.684193 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-01-03 00:24:50.684207 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-01-03 00:24:50.684215 | 
orchestrator | 2026-01-03 00:24:50.684222 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-01-03 00:24:50.684230 | orchestrator | Saturday 03 January 2026 00:24:48 +0000 (0:00:01.252) 0:00:10.462 ****** 2026-01-03 00:24:50.684237 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:24:50.684244 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:24:50.684251 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:24:50.684258 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:24:50.684265 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:24:50.684272 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:24:50.684279 | orchestrator | 2026-01-03 00:24:50.684287 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-01-03 00:24:50.684294 | orchestrator | Saturday 03 January 2026 00:24:48 +0000 (0:00:00.149) 0:00:10.612 ****** 2026-01-03 00:24:50.684309 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:24:50.684317 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:24:50.684324 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:24:50.684331 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:24:50.684338 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:24:50.684345 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:24:50.684352 | orchestrator | 2026-01-03 00:24:50.684359 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-01-03 00:24:50.684366 | orchestrator | Saturday 03 January 2026 00:24:48 +0000 (0:00:00.169) 0:00:10.782 ****** 2026-01-03 00:24:50.684373 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:24:50.684381 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:24:50.684388 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:24:50.684395 | orchestrator | changed: [testbed-node-5] 2026-01-03 
00:24:50.684402 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:24:50.684409 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:24:50.684416 | orchestrator | 2026-01-03 00:24:50.684423 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-01-03 00:24:50.684430 | orchestrator | Saturday 03 January 2026 00:24:49 +0000 (0:00:00.556) 0:00:11.338 ****** 2026-01-03 00:24:50.684437 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:24:50.684444 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:24:50.684451 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:24:50.684458 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:24:50.684465 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:24:50.684472 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:24:50.684480 | orchestrator | 2026-01-03 00:24:50.684487 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-01-03 00:24:50.684494 | orchestrator | Saturday 03 January 2026 00:24:49 +0000 (0:00:00.163) 0:00:11.502 ****** 2026-01-03 00:24:50.684501 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-03 00:24:50.684509 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:24:50.684516 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-01-03 00:24:50.684523 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:24:50.684530 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-03 00:24:50.684537 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:24:50.684544 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-03 00:24:50.684551 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:24:50.684558 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-01-03 00:24:50.684566 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:24:50.684573 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-03 
00:24:50.684580 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:24:50.684587 | orchestrator | 2026-01-03 00:24:50.684596 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-01-03 00:24:50.684608 | orchestrator | Saturday 03 January 2026 00:24:50 +0000 (0:00:00.794) 0:00:12.297 ****** 2026-01-03 00:24:50.684624 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:24:50.684643 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:24:50.684653 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:24:50.684664 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:24:50.684675 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:24:50.684686 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:24:50.684697 | orchestrator | 2026-01-03 00:24:50.684732 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-01-03 00:24:50.684743 | orchestrator | Saturday 03 January 2026 00:24:50 +0000 (0:00:00.151) 0:00:12.448 ****** 2026-01-03 00:24:50.684754 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:24:50.684765 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:24:50.684777 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:24:50.684788 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:24:50.684815 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:24:52.048210 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:24:52.048313 | orchestrator | 2026-01-03 00:24:52.048324 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-01-03 00:24:52.048331 | orchestrator | Saturday 03 January 2026 00:24:50 +0000 (0:00:00.136) 0:00:12.585 ****** 2026-01-03 00:24:52.048338 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:24:52.048344 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:24:52.048350 | orchestrator | skipping: [testbed-node-2] 2026-01-03 
00:24:52.048357 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:24:52.048363 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:24:52.048369 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:24:52.048376 | orchestrator | 2026-01-03 00:24:52.048382 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-01-03 00:24:52.048389 | orchestrator | Saturday 03 January 2026 00:24:50 +0000 (0:00:00.132) 0:00:12.717 ****** 2026-01-03 00:24:52.048395 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:24:52.048401 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:24:52.048407 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:24:52.048413 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:24:52.048419 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:24:52.048425 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:24:52.048432 | orchestrator | 2026-01-03 00:24:52.048438 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-01-03 00:24:52.048444 | orchestrator | Saturday 03 January 2026 00:24:51 +0000 (0:00:00.770) 0:00:13.487 ****** 2026-01-03 00:24:52.048450 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:24:52.048456 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:24:52.048463 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:24:52.048469 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:24:52.048475 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:24:52.048481 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:24:52.048487 | orchestrator | 2026-01-03 00:24:52.048494 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:24:52.048501 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-03 00:24:52.048509 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-03 00:24:52.048515 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-03 00:24:52.048536 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-03 00:24:52.048543 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-03 00:24:52.048549 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-03 00:24:52.048555 | orchestrator | 2026-01-03 00:24:52.048561 | orchestrator | 2026-01-03 00:24:52.048568 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:24:52.048574 | orchestrator | Saturday 03 January 2026 00:24:51 +0000 (0:00:00.240) 0:00:13.727 ****** 2026-01-03 00:24:52.048580 | orchestrator | =============================================================================== 2026-01-03 00:24:52.048586 | orchestrator | Gathering Facts --------------------------------------------------------- 4.28s 2026-01-03 00:24:52.048592 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.25s 2026-01-03 00:24:52.048599 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.20s 2026-01-03 00:24:52.048605 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.19s 2026-01-03 00:24:52.048617 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.79s 2026-01-03 00:24:52.048623 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.79s 2026-01-03 00:24:52.048629 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.77s 2026-01-03 00:24:52.048635 | orchestrator | Do not require tty for all users 
---------------------------------------- 0.74s 2026-01-03 00:24:52.048641 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.63s 2026-01-03 00:24:52.048647 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.56s 2026-01-03 00:24:52.048654 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.24s 2026-01-03 00:24:52.048660 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.17s 2026-01-03 00:24:52.048666 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s 2026-01-03 00:24:52.048672 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.15s 2026-01-03 00:24:52.048678 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s 2026-01-03 00:24:52.048684 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s 2026-01-03 00:24:52.048690 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s 2026-01-03 00:24:52.048697 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.13s 2026-01-03 00:24:52.048703 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.13s 2026-01-03 00:24:52.346888 | orchestrator | + osism apply --environment custom facts 2026-01-03 00:24:54.231658 | orchestrator | 2026-01-03 00:24:54 | INFO  | Trying to run play facts in environment custom 2026-01-03 00:25:04.483659 | orchestrator | 2026-01-03 00:25:04 | INFO  | Task 1bb902d0-ce32-4ad0-9ff6-b168ebe47279 (facts) was prepared for execution. 2026-01-03 00:25:04.483774 | orchestrator | 2026-01-03 00:25:04 | INFO  | It takes a moment until task 1bb902d0-ce32-4ad0-9ff6-b168ebe47279 (facts) has been started and output is visible here. 
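The `facts` play that follows copies fact files (e.g. `testbed_ceph_osd_devices`) into a "custom facts directory". As a minimal sketch of the mechanism involved: Ansible local facts live in `/etc/ansible/facts.d`, must have a `.fact` extension, and contain JSON/INI (or an executable printing JSON); they then appear under `ansible_local` on the next fact gathering. The fact name and device list below are illustrative assumptions based on that convention, not values taken from this job; a sandbox prefix is used instead of the real `/etc` path.

```shell
# Illustrative only: the Ansible local-facts convention used by the
# "Copy fact files" tasks below. Fact name and contents are assumptions.
demo=/tmp/facts-demo
mkdir -p "$demo/etc/ansible/facts.d"

# A static .fact file must parse as JSON or INI; Ansible then exposes it
# as ansible_local.testbed_ceph_osd_devices on the managed host.
cat > "$demo/etc/ansible/facts.d/testbed_ceph_osd_devices.fact" <<'EOF'
{
  "devices": ["/dev/sdb", "/dev/sdc"]
}
EOF

# Validate that the fact file is well-formed JSON, as Ansible would require.
python3 -m json.tool "$demo/etc/ansible/facts.d/testbed_ceph_osd_devices.fact"
```

On a real node the directory would be `/etc/ansible/facts.d` itself, which is why the play first runs a "Create custom facts directory" task.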
2026-01-03 00:25:48.200737 | orchestrator | 2026-01-03 00:25:48.200907 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-01-03 00:25:48.200934 | orchestrator | 2026-01-03 00:25:48.200954 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-01-03 00:25:48.200973 | orchestrator | Saturday 03 January 2026 00:25:08 +0000 (0:00:00.082) 0:00:00.082 ****** 2026-01-03 00:25:48.200991 | orchestrator | ok: [testbed-manager] 2026-01-03 00:25:48.201012 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:25:48.201033 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:25:48.201123 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:25:48.201143 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:25:48.201162 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:25:48.201182 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:25:48.201199 | orchestrator | 2026-01-03 00:25:48.201218 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-01-03 00:25:48.201261 | orchestrator | Saturday 03 January 2026 00:25:09 +0000 (0:00:01.348) 0:00:01.431 ****** 2026-01-03 00:25:48.201283 | orchestrator | ok: [testbed-manager] 2026-01-03 00:25:48.201305 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:25:48.201325 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:25:48.201344 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:25:48.201365 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:25:48.201384 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:25:48.201402 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:25:48.201419 | orchestrator | 2026-01-03 00:25:48.201438 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-01-03 00:25:48.201456 | orchestrator | 2026-01-03 00:25:48.201474 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2026-01-03 00:25:48.201525 | orchestrator | Saturday 03 January 2026 00:25:10 +0000 (0:00:01.145) 0:00:02.577 ****** 2026-01-03 00:25:48.201545 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:25:48.201563 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:25:48.201582 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:25:48.201602 | orchestrator | 2026-01-03 00:25:48.201620 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-03 00:25:48.201642 | orchestrator | Saturday 03 January 2026 00:25:11 +0000 (0:00:00.104) 0:00:02.681 ****** 2026-01-03 00:25:48.201653 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:25:48.201664 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:25:48.201675 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:25:48.201685 | orchestrator | 2026-01-03 00:25:48.201696 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-03 00:25:48.201707 | orchestrator | Saturday 03 January 2026 00:25:11 +0000 (0:00:00.200) 0:00:02.882 ****** 2026-01-03 00:25:48.201718 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:25:48.201729 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:25:48.201740 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:25:48.201750 | orchestrator | 2026-01-03 00:25:48.201761 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-03 00:25:48.201772 | orchestrator | Saturday 03 January 2026 00:25:11 +0000 (0:00:00.243) 0:00:03.125 ****** 2026-01-03 00:25:48.201784 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:25:48.201796 | orchestrator | 2026-01-03 00:25:48.201807 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2026-01-03 00:25:48.201819 | orchestrator | Saturday 03 January 2026 00:25:11 +0000 (0:00:00.124) 0:00:03.249 ****** 2026-01-03 00:25:48.201829 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:25:48.201840 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:25:48.201851 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:25:48.201862 | orchestrator | 2026-01-03 00:25:48.201873 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-03 00:25:48.201884 | orchestrator | Saturday 03 January 2026 00:25:12 +0000 (0:00:00.554) 0:00:03.803 ****** 2026-01-03 00:25:48.201895 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:25:48.201906 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:25:48.201917 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:25:48.201927 | orchestrator | 2026-01-03 00:25:48.201938 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-03 00:25:48.201949 | orchestrator | Saturday 03 January 2026 00:25:12 +0000 (0:00:00.112) 0:00:03.916 ****** 2026-01-03 00:25:48.201960 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:25:48.201971 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:25:48.201982 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:25:48.201993 | orchestrator | 2026-01-03 00:25:48.202003 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-03 00:25:48.202014 | orchestrator | Saturday 03 January 2026 00:25:13 +0000 (0:00:01.060) 0:00:04.977 ****** 2026-01-03 00:25:48.202129 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:25:48.202141 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:25:48.202162 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:25:48.202174 | orchestrator | 2026-01-03 00:25:48.202185 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-01-03 
00:25:48.202196 | orchestrator | Saturday 03 January 2026 00:25:13 +0000 (0:00:00.484) 0:00:05.461 ****** 2026-01-03 00:25:48.202207 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:25:48.202217 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:25:48.202228 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:25:48.202239 | orchestrator | 2026-01-03 00:25:48.202250 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-03 00:25:48.202260 | orchestrator | Saturday 03 January 2026 00:25:14 +0000 (0:00:01.102) 0:00:06.564 ****** 2026-01-03 00:25:48.202271 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:25:48.202294 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:25:48.202305 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:25:48.202315 | orchestrator | 2026-01-03 00:25:48.202326 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-01-03 00:25:48.202337 | orchestrator | Saturday 03 January 2026 00:25:31 +0000 (0:00:16.137) 0:00:22.702 ****** 2026-01-03 00:25:48.202348 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:25:48.202358 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:25:48.202369 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:25:48.202380 | orchestrator | 2026-01-03 00:25:48.202391 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-01-03 00:25:48.202426 | orchestrator | Saturday 03 January 2026 00:25:31 +0000 (0:00:00.103) 0:00:22.806 ****** 2026-01-03 00:25:48.202438 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:25:48.202449 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:25:48.202460 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:25:48.202470 | orchestrator | 2026-01-03 00:25:48.202481 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-01-03 
00:25:48.202492 | orchestrator | Saturday 03 January 2026 00:25:38 +0000 (0:00:07.769) 0:00:30.575 ****** 2026-01-03 00:25:48.202503 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:25:48.202514 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:25:48.202525 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:25:48.202536 | orchestrator | 2026-01-03 00:25:48.202546 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-01-03 00:25:48.202557 | orchestrator | Saturday 03 January 2026 00:25:39 +0000 (0:00:00.504) 0:00:31.080 ****** 2026-01-03 00:25:48.202569 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2026-01-03 00:25:48.202580 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2026-01-03 00:25:48.202591 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2026-01-03 00:25:48.202602 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2026-01-03 00:25:48.202612 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2026-01-03 00:25:48.202623 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2026-01-03 00:25:48.202634 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2026-01-03 00:25:48.202645 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2026-01-03 00:25:48.202656 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2026-01-03 00:25:48.202667 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2026-01-03 00:25:48.202677 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2026-01-03 00:25:48.202688 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2026-01-03 00:25:48.202699 | orchestrator | 2026-01-03 00:25:48.202710 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of 
package cache] ***** 2026-01-03 00:25:48.202721 | orchestrator | Saturday 03 January 2026 00:25:43 +0000 (0:00:03.580) 0:00:34.661 ****** 2026-01-03 00:25:48.202731 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:25:48.202742 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:25:48.202753 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:25:48.202764 | orchestrator | 2026-01-03 00:25:48.202780 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-03 00:25:48.202799 | orchestrator | 2026-01-03 00:25:48.202817 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-03 00:25:48.202836 | orchestrator | Saturday 03 January 2026 00:25:44 +0000 (0:00:01.439) 0:00:36.100 ****** 2026-01-03 00:25:48.202855 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:25:48.202874 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:25:48.202892 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:25:48.202912 | orchestrator | ok: [testbed-manager] 2026-01-03 00:25:48.202931 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:25:48.202959 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:25:48.202971 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:25:48.202982 | orchestrator | 2026-01-03 00:25:48.202993 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:25:48.203009 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:25:48.203028 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:25:48.203156 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:25:48.203180 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:25:48.203198 | orchestrator | testbed-node-3 : 
ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:25:48.203214 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:25:48.203230 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:25:48.203247 | orchestrator | 2026-01-03 00:25:48.203262 | orchestrator | 2026-01-03 00:25:48.203282 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:25:48.203303 | orchestrator | Saturday 03 January 2026 00:25:48 +0000 (0:00:03.682) 0:00:39.782 ****** 2026-01-03 00:25:48.203322 | orchestrator | =============================================================================== 2026-01-03 00:25:48.203339 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.14s 2026-01-03 00:25:48.203356 | orchestrator | Install required packages (Debian) -------------------------------------- 7.77s 2026-01-03 00:25:48.203374 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.68s 2026-01-03 00:25:48.203392 | orchestrator | Copy fact files --------------------------------------------------------- 3.58s 2026-01-03 00:25:48.203410 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.44s 2026-01-03 00:25:48.203427 | orchestrator | Create custom facts directory ------------------------------------------- 1.35s 2026-01-03 00:25:48.203464 | orchestrator | Copy fact file ---------------------------------------------------------- 1.15s 2026-01-03 00:25:48.422000 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.10s 2026-01-03 00:25:48.422160 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.06s 2026-01-03 00:25:48.422173 | orchestrator | osism.commons.repository : Create 
/etc/apt/sources.list.d directory ----- 0.55s 2026-01-03 00:25:48.422183 | orchestrator | Create custom facts directory ------------------------------------------- 0.51s 2026-01-03 00:25:48.422192 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.48s 2026-01-03 00:25:48.422201 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.24s 2026-01-03 00:25:48.422210 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.20s 2026-01-03 00:25:48.422236 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.12s 2026-01-03 00:25:48.422246 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s 2026-01-03 00:25:48.422254 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s 2026-01-03 00:25:48.422263 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s 2026-01-03 00:25:48.788879 | orchestrator | + osism apply bootstrap 2026-01-03 00:26:00.988253 | orchestrator | 2026-01-03 00:26:00 | INFO  | Task e7abe0f5-ee7c-4475-b001-511d45736137 (bootstrap) was prepared for execution. 2026-01-03 00:26:00.988348 | orchestrator | 2026-01-03 00:26:00 | INFO  | It takes a moment until task e7abe0f5-ee7c-4475-b001-511d45736137 (bootstrap) has been started and output is visible here. 
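The repository tasks above (Ubuntu 24.04 path: "Remove sources.list file" followed by "Copy ubuntu.sources file") reflect the deb822 source format that replaces the classic one-line `sources.list` on noble. As a sketch under assumptions: the stanza below uses the stock Ubuntu archive mirror and keyring path, which is the general deb822 shape, not necessarily the exact content the role deploys; a sandbox prefix stands in for `/etc`.

```shell
# Illustrative only: a deb822-style ubuntu.sources stanza of the kind the
# repository role installs on Ubuntu >= 24.04. Mirror URL, suites, and
# keyring path are assumptions (stock Ubuntu defaults).
demo=/tmp/apt-demo
mkdir -p "$demo/etc/apt/sources.list.d"

cat > "$demo/etc/apt/sources.list.d/ubuntu.sources" <<'EOF'
Types: deb
URIs: http://archive.ubuntu.com/ubuntu/
Suites: noble noble-updates noble-security
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
EOF

# One deb822 paragraph can cover several suites, which is why a single
# ubuntu.sources file replaces multiple sources.list lines.
grep -c '^Types: deb' "$demo/etc/apt/sources.list.d/ubuntu.sources"
```

This also explains the "Include tasks for Ubuntu < 24.04" task being skipped: older releases keep the one-line `sources.list` format instead.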
2026-01-03 00:26:16.429127 | orchestrator | 2026-01-03 00:26:16.429273 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2026-01-03 00:26:16.429299 | orchestrator | 2026-01-03 00:26:16.429318 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2026-01-03 00:26:16.429335 | orchestrator | Saturday 03 January 2026 00:26:04 +0000 (0:00:00.112) 0:00:00.112 ****** 2026-01-03 00:26:16.429352 | orchestrator | ok: [testbed-manager] 2026-01-03 00:26:16.429371 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:26:16.429389 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:26:16.429407 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:26:16.429425 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:26:16.429442 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:26:16.429457 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:26:16.429468 | orchestrator | 2026-01-03 00:26:16.429478 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-03 00:26:16.429488 | orchestrator | 2026-01-03 00:26:16.429498 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-03 00:26:16.429508 | orchestrator | Saturday 03 January 2026 00:26:05 +0000 (0:00:00.184) 0:00:00.297 ****** 2026-01-03 00:26:16.429517 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:26:16.429527 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:26:16.429538 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:26:16.429548 | orchestrator | ok: [testbed-manager] 2026-01-03 00:26:16.429558 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:26:16.429567 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:26:16.429577 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:26:16.429588 | orchestrator | 2026-01-03 00:26:16.429600 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2026-01-03 00:26:16.429611 | orchestrator | 2026-01-03 00:26:16.429623 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-03 00:26:16.429634 | orchestrator | Saturday 03 January 2026 00:26:08 +0000 (0:00:03.729) 0:00:04.027 ****** 2026-01-03 00:26:16.429646 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-01-03 00:26:16.429658 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-01-03 00:26:16.429669 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2026-01-03 00:26:16.429680 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-01-03 00:26:16.429691 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-03 00:26:16.429703 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-01-03 00:26:16.429715 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2026-01-03 00:26:16.429740 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-03 00:26:16.429753 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-03 00:26:16.429764 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-03 00:26:16.429775 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-01-03 00:26:16.429787 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-03 00:26:16.429798 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-01-03 00:26:16.429809 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2026-01-03 00:26:16.429821 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-01-03 00:26:16.429832 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-01-03 00:26:16.429844 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-03 00:26:16.429855 | orchestrator | skipping: 
[testbed-node-5] => (item=testbed-node-3)  2026-01-03 00:26:16.429867 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-01-03 00:26:16.429878 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:26:16.429914 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-01-03 00:26:16.429927 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2026-01-03 00:26:16.429938 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-03 00:26:16.429949 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:26:16.429959 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-03 00:26:16.429968 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-01-03 00:26:16.429978 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-03 00:26:16.429987 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:26:16.429997 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-01-03 00:26:16.430006 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2026-01-03 00:26:16.430100 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-01-03 00:26:16.430114 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-01-03 00:26:16.430124 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-01-03 00:26:16.430134 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2026-01-03 00:26:16.430143 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-03 00:26:16.430153 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-01-03 00:26:16.430185 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-01-03 00:26:16.430210 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-01-03 00:26:16.430228 | orchestrator | skipping: [testbed-node-5] => 
(item=testbed-node-1)  2026-01-03 00:26:16.430244 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-03 00:26:16.430273 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-01-03 00:26:16.430331 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-01-03 00:26:16.430344 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-03 00:26:16.430354 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-03 00:26:16.430364 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:26:16.430374 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-01-03 00:26:16.430406 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-01-03 00:26:16.430422 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-03 00:26:16.430436 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:26:16.430450 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-01-03 00:26:16.430464 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-01-03 00:26:16.430478 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-01-03 00:26:16.430492 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:26:16.430505 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-01-03 00:26:16.430519 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-01-03 00:26:16.430533 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:26:16.430547 | orchestrator | 2026-01-03 00:26:16.430560 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-01-03 00:26:16.430575 | orchestrator | 2026-01-03 00:26:16.430591 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-01-03 00:26:16.430606 | orchestrator | Saturday 03 January 2026 00:26:09 +0000 
(0:00:00.376) 0:00:04.403 ****** 2026-01-03 00:26:16.430619 | orchestrator | ok: [testbed-manager] 2026-01-03 00:26:16.430632 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:26:16.430647 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:26:16.430662 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:26:16.430677 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:26:16.430692 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:26:16.430707 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:26:16.430724 | orchestrator | 2026-01-03 00:26:16.430742 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-01-03 00:26:16.430774 | orchestrator | Saturday 03 January 2026 00:26:10 +0000 (0:00:01.192) 0:00:05.595 ****** 2026-01-03 00:26:16.430787 | orchestrator | ok: [testbed-manager] 2026-01-03 00:26:16.430797 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:26:16.430807 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:26:16.430817 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:26:16.430826 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:26:16.430836 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:26:16.430846 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:26:16.430855 | orchestrator | 2026-01-03 00:26:16.430865 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-01-03 00:26:16.430875 | orchestrator | Saturday 03 January 2026 00:26:11 +0000 (0:00:01.312) 0:00:06.908 ****** 2026-01-03 00:26:16.430885 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:26:16.430903 | orchestrator | 2026-01-03 00:26:16.430919 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-01-03 00:26:16.430933 | orchestrator 
| Saturday 03 January 2026 00:26:12 +0000 (0:00:00.271) 0:00:07.180 ****** 2026-01-03 00:26:16.430949 | orchestrator | changed: [testbed-manager] 2026-01-03 00:26:16.430966 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:26:16.430983 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:26:16.431000 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:26:16.431016 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:26:16.431032 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:26:16.431042 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:26:16.431060 | orchestrator | 2026-01-03 00:26:16.431116 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-01-03 00:26:16.431131 | orchestrator | Saturday 03 January 2026 00:26:14 +0000 (0:00:01.946) 0:00:09.127 ****** 2026-01-03 00:26:16.431145 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:26:16.431163 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:26:16.431182 | orchestrator | 2026-01-03 00:26:16.431199 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-01-03 00:26:16.431216 | orchestrator | Saturday 03 January 2026 00:26:14 +0000 (0:00:00.245) 0:00:09.373 ****** 2026-01-03 00:26:16.431232 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:26:16.431248 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:26:16.431265 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:26:16.431282 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:26:16.431293 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:26:16.431303 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:26:16.431312 | orchestrator | 2026-01-03 00:26:16.431322 | orchestrator | TASK [osism.commons.proxy : 
Set system wide settings in environment file] ****** 2026-01-03 00:26:16.431331 | orchestrator | Saturday 03 January 2026 00:26:15 +0000 (0:00:01.031) 0:00:10.404 ****** 2026-01-03 00:26:16.431341 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:26:16.431351 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:26:16.431360 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:26:16.431369 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:26:16.431379 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:26:16.431388 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:26:16.431397 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:26:16.431407 | orchestrator | 2026-01-03 00:26:16.431417 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-01-03 00:26:16.431427 | orchestrator | Saturday 03 January 2026 00:26:15 +0000 (0:00:00.577) 0:00:10.982 ****** 2026-01-03 00:26:16.431436 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:26:16.431455 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:26:16.431475 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:26:16.431485 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:26:16.431494 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:26:16.431504 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:26:16.431514 | orchestrator | ok: [testbed-manager] 2026-01-03 00:26:16.431523 | orchestrator | 2026-01-03 00:26:16.431533 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-01-03 00:26:16.431544 | orchestrator | Saturday 03 January 2026 00:26:16 +0000 (0:00:00.411) 0:00:11.393 ****** 2026-01-03 00:26:16.431553 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:26:16.431563 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:26:16.431585 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:26:29.370077 | orchestrator | skipping: 
[testbed-node-5] 2026-01-03 00:26:29.370223 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:26:29.370246 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:26:29.370263 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:26:29.370281 | orchestrator | 2026-01-03 00:26:29.370300 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-01-03 00:26:29.370319 | orchestrator | Saturday 03 January 2026 00:26:16 +0000 (0:00:00.207) 0:00:11.601 ****** 2026-01-03 00:26:29.370337 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:26:29.370375 | orchestrator | 2026-01-03 00:26:29.370393 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-01-03 00:26:29.370410 | orchestrator | Saturday 03 January 2026 00:26:16 +0000 (0:00:00.284) 0:00:11.886 ****** 2026-01-03 00:26:29.370427 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:26:29.370443 | orchestrator | 2026-01-03 00:26:29.370459 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-01-03 00:26:29.370475 | orchestrator | Saturday 03 January 2026 00:26:17 +0000 (0:00:00.282) 0:00:12.169 ****** 2026-01-03 00:26:29.370489 | orchestrator | ok: [testbed-manager] 2026-01-03 00:26:29.370505 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:26:29.370522 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:26:29.370538 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:26:29.370554 | orchestrator | ok: [testbed-node-0] 2026-01-03 
00:26:29.370570 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:26:29.370583 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:26:29.370598 | orchestrator | 2026-01-03 00:26:29.370615 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-01-03 00:26:29.370631 | orchestrator | Saturday 03 January 2026 00:26:18 +0000 (0:00:01.686) 0:00:13.856 ****** 2026-01-03 00:26:29.370647 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:26:29.370663 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:26:29.370678 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:26:29.370694 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:26:29.370711 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:26:29.370726 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:26:29.370740 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:26:29.370755 | orchestrator | 2026-01-03 00:26:29.370769 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-01-03 00:26:29.370786 | orchestrator | Saturday 03 January 2026 00:26:18 +0000 (0:00:00.208) 0:00:14.065 ****** 2026-01-03 00:26:29.370802 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:26:29.370818 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:26:29.370835 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:26:29.370851 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:26:29.370869 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:26:29.370917 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:26:29.370935 | orchestrator | ok: [testbed-manager] 2026-01-03 00:26:29.370951 | orchestrator | 2026-01-03 00:26:29.370968 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-01-03 00:26:29.370985 | orchestrator | Saturday 03 January 2026 00:26:20 +0000 (0:00:01.381) 0:00:15.446 ****** 2026-01-03 00:26:29.371001 | orchestrator | skipping: 
[testbed-manager] 2026-01-03 00:26:29.371017 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:26:29.371033 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:26:29.371049 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:26:29.371066 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:26:29.371082 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:26:29.371199 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:26:29.371217 | orchestrator | 2026-01-03 00:26:29.371234 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-01-03 00:26:29.371252 | orchestrator | Saturday 03 January 2026 00:26:20 +0000 (0:00:00.319) 0:00:15.765 ****** 2026-01-03 00:26:29.371269 | orchestrator | ok: [testbed-manager] 2026-01-03 00:26:29.371286 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:26:29.371303 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:26:29.371319 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:26:29.371335 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:26:29.371351 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:26:29.371367 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:26:29.371382 | orchestrator | 2026-01-03 00:26:29.371399 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-01-03 00:26:29.371416 | orchestrator | Saturday 03 January 2026 00:26:21 +0000 (0:00:00.544) 0:00:16.310 ****** 2026-01-03 00:26:29.371433 | orchestrator | ok: [testbed-manager] 2026-01-03 00:26:29.371450 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:26:29.371466 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:26:29.371482 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:26:29.371513 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:26:29.371532 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:26:29.371548 | orchestrator | changed: 
[testbed-node-2] 2026-01-03 00:26:29.371564 | orchestrator | 2026-01-03 00:26:29.371581 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-01-03 00:26:29.371597 | orchestrator | Saturday 03 January 2026 00:26:22 +0000 (0:00:01.120) 0:00:17.430 ****** 2026-01-03 00:26:29.371611 | orchestrator | ok: [testbed-manager] 2026-01-03 00:26:29.371626 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:26:29.371642 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:26:29.371658 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:26:29.371675 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:26:29.371692 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:26:29.371709 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:26:29.371726 | orchestrator | 2026-01-03 00:26:29.371742 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-01-03 00:26:29.371758 | orchestrator | Saturday 03 January 2026 00:26:23 +0000 (0:00:01.103) 0:00:18.534 ****** 2026-01-03 00:26:29.371804 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:26:29.371823 | orchestrator | 2026-01-03 00:26:29.371842 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-01-03 00:26:29.371859 | orchestrator | Saturday 03 January 2026 00:26:23 +0000 (0:00:00.335) 0:00:18.869 ****** 2026-01-03 00:26:29.371875 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:26:29.371891 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:26:29.371907 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:26:29.371922 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:26:29.371937 | orchestrator | changed: [testbed-node-5] 2026-01-03 
00:26:29.371965 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:26:29.371980 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:26:29.371995 | orchestrator | 2026-01-03 00:26:29.372009 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-01-03 00:26:29.372024 | orchestrator | Saturday 03 January 2026 00:26:25 +0000 (0:00:01.268) 0:00:20.138 ****** 2026-01-03 00:26:29.372040 | orchestrator | ok: [testbed-manager] 2026-01-03 00:26:29.372055 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:26:29.372072 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:26:29.372114 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:26:29.372133 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:26:29.372149 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:26:29.372166 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:26:29.372182 | orchestrator | 2026-01-03 00:26:29.372199 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-03 00:26:29.372217 | orchestrator | Saturday 03 January 2026 00:26:25 +0000 (0:00:00.223) 0:00:20.361 ****** 2026-01-03 00:26:29.372234 | orchestrator | ok: [testbed-manager] 2026-01-03 00:26:29.372250 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:26:29.372267 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:26:29.372283 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:26:29.372300 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:26:29.372317 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:26:29.372334 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:26:29.372349 | orchestrator | 2026-01-03 00:26:29.372365 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-03 00:26:29.372381 | orchestrator | Saturday 03 January 2026 00:26:25 +0000 (0:00:00.213) 0:00:20.575 ****** 2026-01-03 00:26:29.372396 | orchestrator | ok: [testbed-manager] 2026-01-03 00:26:29.372412 | 
orchestrator | ok: [testbed-node-3] 2026-01-03 00:26:29.372429 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:26:29.372446 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:26:29.372463 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:26:29.372478 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:26:29.372494 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:26:29.372511 | orchestrator | 2026-01-03 00:26:29.372527 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-03 00:26:29.372544 | orchestrator | Saturday 03 January 2026 00:26:25 +0000 (0:00:00.199) 0:00:20.774 ****** 2026-01-03 00:26:29.372562 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:26:29.372580 | orchestrator | 2026-01-03 00:26:29.372596 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-03 00:26:29.372613 | orchestrator | Saturday 03 January 2026 00:26:25 +0000 (0:00:00.245) 0:00:21.020 ****** 2026-01-03 00:26:29.372629 | orchestrator | ok: [testbed-manager] 2026-01-03 00:26:29.372646 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:26:29.372663 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:26:29.372680 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:26:29.372696 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:26:29.372712 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:26:29.372728 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:26:29.372745 | orchestrator | 2026-01-03 00:26:29.372762 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-03 00:26:29.372778 | orchestrator | Saturday 03 January 2026 00:26:26 +0000 (0:00:00.516) 0:00:21.537 ****** 2026-01-03 00:26:29.372794 | orchestrator | 
skipping: [testbed-manager] 2026-01-03 00:26:29.372810 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:26:29.372827 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:26:29.372844 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:26:29.372861 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:26:29.372878 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:26:29.372893 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:26:29.372924 | orchestrator | 2026-01-03 00:26:29.372941 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-03 00:26:29.372958 | orchestrator | Saturday 03 January 2026 00:26:26 +0000 (0:00:00.211) 0:00:21.748 ****** 2026-01-03 00:26:29.372975 | orchestrator | ok: [testbed-manager] 2026-01-03 00:26:29.372991 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:26:29.373007 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:26:29.373023 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:26:29.373040 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:26:29.373058 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:26:29.373075 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:26:29.373110 | orchestrator | 2026-01-03 00:26:29.373128 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-03 00:26:29.373143 | orchestrator | Saturday 03 January 2026 00:26:27 +0000 (0:00:01.049) 0:00:22.798 ****** 2026-01-03 00:26:29.373160 | orchestrator | ok: [testbed-manager] 2026-01-03 00:26:29.373176 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:26:29.373193 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:26:29.373210 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:26:29.373226 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:26:29.373242 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:26:29.373258 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:26:29.373275 | orchestrator | 
2026-01-03 00:26:29.373292 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-01-03 00:26:29.373309 | orchestrator | Saturday 03 January 2026 00:26:28 +0000 (0:00:00.571) 0:00:23.369 ****** 2026-01-03 00:26:29.373325 | orchestrator | ok: [testbed-manager] 2026-01-03 00:26:29.373341 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:26:29.373357 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:26:29.373373 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:26:29.373404 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:27:08.277934 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:27:08.278117 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:27:08.278205 | orchestrator | 2026-01-03 00:27:08.278227 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-03 00:27:08.278247 | orchestrator | Saturday 03 January 2026 00:26:29 +0000 (0:00:01.111) 0:00:24.481 ****** 2026-01-03 00:27:08.278264 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:27:08.278275 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:27:08.278285 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:27:08.278295 | orchestrator | changed: [testbed-manager] 2026-01-03 00:27:08.278305 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:27:08.278315 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:27:08.278325 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:27:08.278335 | orchestrator | 2026-01-03 00:27:08.278345 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2026-01-03 00:27:08.278355 | orchestrator | Saturday 03 January 2026 00:26:45 +0000 (0:00:15.920) 0:00:40.401 ****** 2026-01-03 00:27:08.278365 | orchestrator | ok: [testbed-manager] 2026-01-03 00:27:08.278376 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:27:08.278386 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:27:08.278396 | orchestrator 
| ok: [testbed-node-5] 2026-01-03 00:27:08.278406 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:27:08.278415 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:27:08.278425 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:27:08.278435 | orchestrator | 2026-01-03 00:27:08.278445 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-01-03 00:27:08.278455 | orchestrator | Saturday 03 January 2026 00:26:45 +0000 (0:00:00.219) 0:00:40.621 ****** 2026-01-03 00:27:08.278465 | orchestrator | ok: [testbed-manager] 2026-01-03 00:27:08.278476 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:27:08.278487 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:27:08.278499 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:27:08.278510 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:27:08.278521 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:27:08.278532 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:27:08.278568 | orchestrator | 2026-01-03 00:27:08.278581 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-01-03 00:27:08.278593 | orchestrator | Saturday 03 January 2026 00:26:45 +0000 (0:00:00.241) 0:00:40.862 ****** 2026-01-03 00:27:08.278604 | orchestrator | ok: [testbed-manager] 2026-01-03 00:27:08.278616 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:27:08.278628 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:27:08.278640 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:27:08.278651 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:27:08.278663 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:27:08.278674 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:27:08.278685 | orchestrator | 2026-01-03 00:27:08.278697 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-01-03 00:27:08.278709 | orchestrator | Saturday 03 January 2026 00:26:45 +0000 (0:00:00.210) 0:00:41.073 ****** 2026-01-03 
00:27:08.278721 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:27:08.278736 | orchestrator | 2026-01-03 00:27:08.278748 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-01-03 00:27:08.278760 | orchestrator | Saturday 03 January 2026 00:26:46 +0000 (0:00:00.291) 0:00:41.364 ****** 2026-01-03 00:27:08.278772 | orchestrator | ok: [testbed-manager] 2026-01-03 00:27:08.278784 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:27:08.278795 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:27:08.278806 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:27:08.278815 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:27:08.278825 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:27:08.278834 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:27:08.278844 | orchestrator | 2026-01-03 00:27:08.278853 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-01-03 00:27:08.278863 | orchestrator | Saturday 03 January 2026 00:26:48 +0000 (0:00:01.786) 0:00:43.150 ****** 2026-01-03 00:27:08.278873 | orchestrator | changed: [testbed-manager] 2026-01-03 00:27:08.278882 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:27:08.278892 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:27:08.278901 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:27:08.278911 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:27:08.278920 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:27:08.278930 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:27:08.278939 | orchestrator | 2026-01-03 00:27:08.278949 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-01-03 00:27:08.278959 | 
orchestrator | Saturday 03 January 2026 00:26:49 +0000 (0:00:01.044) 0:00:44.195 ****** 2026-01-03 00:27:08.278968 | orchestrator | ok: [testbed-manager] 2026-01-03 00:27:08.278978 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:27:08.279004 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:27:08.279014 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:27:08.279024 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:27:08.279033 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:27:08.279043 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:27:08.279052 | orchestrator | 2026-01-03 00:27:08.279062 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-01-03 00:27:08.279077 | orchestrator | Saturday 03 January 2026 00:26:49 +0000 (0:00:00.843) 0:00:45.038 ****** 2026-01-03 00:27:08.279088 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:27:08.279100 | orchestrator | 2026-01-03 00:27:08.279110 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-01-03 00:27:08.279120 | orchestrator | Saturday 03 January 2026 00:26:50 +0000 (0:00:00.299) 0:00:45.338 ****** 2026-01-03 00:27:08.279172 | orchestrator | changed: [testbed-manager] 2026-01-03 00:27:08.279192 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:27:08.279208 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:27:08.279225 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:27:08.279242 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:27:08.279258 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:27:08.279274 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:27:08.279295 | orchestrator | 2026-01-03 00:27:08.279343 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2026-01-03 00:27:08.279361 | orchestrator | Saturday 03 January 2026 00:26:51 +0000 (0:00:01.121) 0:00:46.459 ****** 2026-01-03 00:27:08.279379 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:27:08.279398 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:27:08.279417 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:27:08.279433 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:27:08.279450 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:27:08.279466 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:27:08.279481 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:27:08.279498 | orchestrator | 2026-01-03 00:27:08.279514 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-01-03 00:27:08.279532 | orchestrator | Saturday 03 January 2026 00:26:51 +0000 (0:00:00.216) 0:00:46.676 ****** 2026-01-03 00:27:08.279550 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:27:08.279568 | orchestrator | 2026-01-03 00:27:08.279586 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-01-03 00:27:08.279597 | orchestrator | Saturday 03 January 2026 00:26:51 +0000 (0:00:00.315) 0:00:46.992 ****** 2026-01-03 00:27:08.279606 | orchestrator | ok: [testbed-manager] 2026-01-03 00:27:08.279616 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:27:08.279625 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:27:08.279635 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:27:08.279644 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:27:08.279654 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:27:08.279663 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:27:08.279673 | 
orchestrator | 2026-01-03 00:27:08.279682 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-01-03 00:27:08.279692 | orchestrator | Saturday 03 January 2026 00:26:53 +0000 (0:00:01.998) 0:00:48.991 ****** 2026-01-03 00:27:08.279702 | orchestrator | changed: [testbed-manager] 2026-01-03 00:27:08.279711 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:27:08.279721 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:27:08.279731 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:27:08.279741 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:27:08.279750 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:27:08.279759 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:27:08.279769 | orchestrator | 2026-01-03 00:27:08.279778 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-01-03 00:27:08.279788 | orchestrator | Saturday 03 January 2026 00:26:54 +0000 (0:00:01.110) 0:00:50.101 ****** 2026-01-03 00:27:08.279797 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:27:08.279807 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:27:08.279816 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:27:08.279826 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:27:08.279835 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:27:08.279845 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:27:08.279854 | orchestrator | changed: [testbed-manager] 2026-01-03 00:27:08.279864 | orchestrator | 2026-01-03 00:27:08.279873 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-01-03 00:27:08.279883 | orchestrator | Saturday 03 January 2026 00:27:05 +0000 (0:00:10.746) 0:01:00.848 ****** 2026-01-03 00:27:08.279892 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:27:08.279913 | orchestrator | ok: [testbed-manager] 2026-01-03 00:27:08.279923 | orchestrator | ok: 
[testbed-node-0] 2026-01-03 00:27:08.279933 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:27:08.279942 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:27:08.279952 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:27:08.279961 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:27:08.279971 | orchestrator | 2026-01-03 00:27:08.279980 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-01-03 00:27:08.279990 | orchestrator | Saturday 03 January 2026 00:27:06 +0000 (0:00:00.920) 0:01:01.769 ****** 2026-01-03 00:27:08.280000 | orchestrator | ok: [testbed-manager] 2026-01-03 00:27:08.280009 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:27:08.280019 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:27:08.280028 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:27:08.280038 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:27:08.280047 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:27:08.280057 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:27:08.280066 | orchestrator | 2026-01-03 00:27:08.280076 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-01-03 00:27:08.280086 | orchestrator | Saturday 03 January 2026 00:27:07 +0000 (0:00:00.909) 0:01:02.678 ****** 2026-01-03 00:27:08.280095 | orchestrator | ok: [testbed-manager] 2026-01-03 00:27:08.280105 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:27:08.280114 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:27:08.280124 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:27:08.280161 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:27:08.280171 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:27:08.280180 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:27:08.280190 | orchestrator | 2026-01-03 00:27:08.280199 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-01-03 00:27:08.280216 | orchestrator | Saturday 
03 January 2026 00:27:07 +0000 (0:00:00.231) 0:01:02.910 ****** 2026-01-03 00:27:08.280227 | orchestrator | ok: [testbed-manager] 2026-01-03 00:27:08.280240 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:27:08.280257 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:27:08.280274 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:27:08.280288 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:27:08.280298 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:27:08.280308 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:27:08.280317 | orchestrator | 2026-01-03 00:27:08.280327 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-01-03 00:27:08.280337 | orchestrator | Saturday 03 January 2026 00:27:07 +0000 (0:00:00.202) 0:01:03.113 ****** 2026-01-03 00:27:08.280347 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:27:08.280357 | orchestrator | 2026-01-03 00:27:08.280377 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-01-03 00:29:26.431559 | orchestrator | Saturday 03 January 2026 00:27:08 +0000 (0:00:00.281) 0:01:03.394 ****** 2026-01-03 00:29:26.431702 | orchestrator | ok: [testbed-manager] 2026-01-03 00:29:26.431726 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:29:26.431738 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:29:26.431751 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:29:26.431762 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:29:26.431774 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:29:26.431785 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:29:26.431796 | orchestrator | 2026-01-03 00:29:26.431808 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 
2026-01-03 00:29:26.431820 | orchestrator | Saturday 03 January 2026 00:27:09 +0000 (0:00:01.716) 0:01:05.111 ****** 2026-01-03 00:29:26.431831 | orchestrator | changed: [testbed-manager] 2026-01-03 00:29:26.431843 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:29:26.431854 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:29:26.431865 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:29:26.431910 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:29:26.431927 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:29:26.431938 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:29:26.431949 | orchestrator | 2026-01-03 00:29:26.431960 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-01-03 00:29:26.431972 | orchestrator | Saturday 03 January 2026 00:27:10 +0000 (0:00:00.613) 0:01:05.724 ****** 2026-01-03 00:29:26.431982 | orchestrator | ok: [testbed-manager] 2026-01-03 00:29:26.431993 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:29:26.432004 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:29:26.432014 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:29:26.432025 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:29:26.432035 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:29:26.432046 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:29:26.432057 | orchestrator | 2026-01-03 00:29:26.432068 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-01-03 00:29:26.432079 | orchestrator | Saturday 03 January 2026 00:27:10 +0000 (0:00:00.258) 0:01:05.982 ****** 2026-01-03 00:29:26.432092 | orchestrator | ok: [testbed-manager] 2026-01-03 00:29:26.432105 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:29:26.432119 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:29:26.432131 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:29:26.432143 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:29:26.432155 | 
orchestrator | ok: [testbed-node-4] 2026-01-03 00:29:26.432168 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:29:26.432181 | orchestrator | 2026-01-03 00:29:26.432194 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-01-03 00:29:26.432207 | orchestrator | Saturday 03 January 2026 00:27:12 +0000 (0:00:01.177) 0:01:07.159 ****** 2026-01-03 00:29:26.432221 | orchestrator | changed: [testbed-manager] 2026-01-03 00:29:26.432233 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:29:26.432246 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:29:26.432352 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:29:26.432364 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:29:26.432375 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:29:26.432385 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:29:26.432396 | orchestrator | 2026-01-03 00:29:26.432407 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-01-03 00:29:26.432418 | orchestrator | Saturday 03 January 2026 00:27:13 +0000 (0:00:01.679) 0:01:08.839 ****** 2026-01-03 00:29:26.432428 | orchestrator | ok: [testbed-manager] 2026-01-03 00:29:26.432439 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:29:26.432450 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:29:26.432460 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:29:26.432471 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:29:26.432482 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:29:26.432493 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:29:26.432504 | orchestrator | 2026-01-03 00:29:26.432522 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-01-03 00:29:26.432540 | orchestrator | Saturday 03 January 2026 00:27:16 +0000 (0:00:02.397) 0:01:11.236 ****** 2026-01-03 00:29:26.432559 | orchestrator | ok: [testbed-manager] 2026-01-03 00:29:26.432578 
| orchestrator | ok: [testbed-node-2] 2026-01-03 00:29:26.432597 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:29:26.432616 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:29:26.432634 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:29:26.432651 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:29:26.432670 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:29:26.432690 | orchestrator | 2026-01-03 00:29:26.432711 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-01-03 00:29:26.432723 | orchestrator | Saturday 03 January 2026 00:27:55 +0000 (0:00:39.849) 0:01:51.086 ****** 2026-01-03 00:29:26.432733 | orchestrator | changed: [testbed-manager] 2026-01-03 00:29:26.432744 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:29:26.432754 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:29:26.432777 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:29:26.432788 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:29:26.432798 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:29:26.432809 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:29:26.432820 | orchestrator | 2026-01-03 00:29:26.432830 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-01-03 00:29:26.432841 | orchestrator | Saturday 03 January 2026 00:29:11 +0000 (0:01:16.017) 0:03:07.104 ****** 2026-01-03 00:29:26.432852 | orchestrator | ok: [testbed-manager] 2026-01-03 00:29:26.432877 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:29:26.432888 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:29:26.432898 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:29:26.432909 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:29:26.432919 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:29:26.432930 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:29:26.432941 | orchestrator | 2026-01-03 00:29:26.432952 | orchestrator | TASK [osism.commons.packages 
: Remove dependencies that are no longer required] *** 2026-01-03 00:29:26.432963 | orchestrator | Saturday 03 January 2026 00:29:13 +0000 (0:00:01.992) 0:03:09.096 ****** 2026-01-03 00:29:26.432973 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:29:26.432984 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:29:26.432995 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:29:26.433005 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:29:26.433016 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:29:26.433026 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:29:26.433037 | orchestrator | changed: [testbed-manager] 2026-01-03 00:29:26.433047 | orchestrator | 2026-01-03 00:29:26.433058 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-01-03 00:29:26.433069 | orchestrator | Saturday 03 January 2026 00:29:25 +0000 (0:00:11.244) 0:03:20.341 ****** 2026-01-03 00:29:26.433112 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-01-03 00:29:26.433130 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 
'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-01-03 00:29:26.433145 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-01-03 00:29:26.433158 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-01-03 00:29:26.433169 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-01-03 00:29:26.433188 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-01-03 00:29:26.433199 | orchestrator | 2026-01-03 00:29:26.433214 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-01-03 00:29:26.433225 | orchestrator | Saturday 03 January 2026 00:29:25 +0000 (0:00:00.408) 0:03:20.750 ****** 2026-01-03 00:29:26.433236 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-03 00:29:26.433272 | orchestrator | 
skipping: [testbed-manager] 2026-01-03 00:29:26.433285 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-03 00:29:26.433296 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-03 00:29:26.433306 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:29:26.433317 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:29:26.433327 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-03 00:29:26.433338 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:29:26.433349 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-03 00:29:26.433360 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-03 00:29:26.433371 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-03 00:29:26.433382 | orchestrator | 2026-01-03 00:29:26.433393 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-01-03 00:29:26.433403 | orchestrator | Saturday 03 January 2026 00:29:26 +0000 (0:00:00.717) 0:03:21.467 ****** 2026-01-03 00:29:26.433414 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-03 00:29:26.433426 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-03 00:29:26.433436 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-03 00:29:26.433447 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-03 00:29:26.433465 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-03 00:29:26.433485 | 
orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-03 00:29:36.422000 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-03 00:29:36.422131 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-03 00:29:36.422142 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-03 00:29:36.422149 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-03 00:29:36.422155 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-03 00:29:36.422161 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-03 00:29:36.422167 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-03 00:29:36.422173 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-03 00:29:36.422179 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-03 00:29:36.422185 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-03 00:29:36.422208 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-03 00:29:36.422214 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-03 00:29:36.422221 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:29:36.422228 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-03 00:29:36.422233 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-03 00:29:36.422242 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-03 00:29:36.422252 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-03 00:29:36.422339 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-03 00:29:36.422350 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-03 00:29:36.422359 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-03 00:29:36.422370 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:29:36.422379 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-03 00:29:36.422389 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-03 00:29:36.422398 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-03 00:29:36.422408 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-03 00:29:36.422418 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-03 00:29:36.422427 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-03 00:29:36.422433 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-03 00:29:36.422442 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:29:36.422451 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-03 00:29:36.422460 | orchestrator | skipping: [testbed-node-5] => 
(item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-03 00:29:36.422470 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-03 00:29:36.422479 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-03 00:29:36.422489 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-03 00:29:36.422498 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-03 00:29:36.422522 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-03 00:29:36.422533 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-03 00:29:36.422543 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:29:36.422554 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-01-03 00:29:36.422564 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-01-03 00:29:36.422574 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-01-03 00:29:36.422584 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-01-03 00:29:36.422596 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-01-03 00:29:36.422618 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-01-03 00:29:36.422639 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-01-03 00:29:36.422651 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-01-03 
00:29:36.422658 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-01-03 00:29:36.422665 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-01-03 00:29:36.422671 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-01-03 00:29:36.422678 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-01-03 00:29:36.422685 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-01-03 00:29:36.422691 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-01-03 00:29:36.422698 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-01-03 00:29:36.422705 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-01-03 00:29:36.422711 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-01-03 00:29:36.422718 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-01-03 00:29:36.422724 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-01-03 00:29:36.422731 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-01-03 00:29:36.422738 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-01-03 00:29:36.422744 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-01-03 00:29:36.422751 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-01-03 00:29:36.422757 | orchestrator | changed: [testbed-node-0] 
=> (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-01-03 00:29:36.422764 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-01-03 00:29:36.422770 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-01-03 00:29:36.422777 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-01-03 00:29:36.422784 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-01-03 00:29:36.422791 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-01-03 00:29:36.422797 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-01-03 00:29:36.422804 | orchestrator | 2026-01-03 00:29:36.422812 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-01-03 00:29:36.422819 | orchestrator | Saturday 03 January 2026 00:29:33 +0000 (0:00:06.898) 0:03:28.366 ****** 2026-01-03 00:29:36.422825 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-03 00:29:36.422832 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-03 00:29:36.422838 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-03 00:29:36.422845 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-03 00:29:36.422851 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-03 00:29:36.422858 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-03 00:29:36.422864 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-03 00:29:36.422875 | orchestrator | 
2026-01-03 00:29:36.422882 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-01-03 00:29:36.422889 | orchestrator | Saturday 03 January 2026 00:29:34 +0000 (0:00:01.547) 0:03:29.913 ******
2026-01-03 00:29:36.422901 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-03 00:29:36.422907 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:29:36.422914 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-03 00:29:36.422920 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:29:36.422926 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-03 00:29:36.422932 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:29:36.422938 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-03 00:29:36.422943 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:29:36.422949 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-03 00:29:36.422955 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-03 00:29:36.422966 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-03 00:29:50.321030 | orchestrator |
2026-01-03 00:29:50.321131 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-01-03 00:29:50.321144 | orchestrator | Saturday 03 January 2026 00:29:36 +0000 (0:00:01.620) 0:03:31.534 ******
2026-01-03 00:29:50.321150 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-03 00:29:50.321158 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:29:50.321166 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-03 00:29:50.321172 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-03 00:29:50.321178 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:29:50.321185 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:29:50.321193 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-03 00:29:50.321200 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:29:50.321208 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-03 00:29:50.321215 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-03 00:29:50.321221 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-03 00:29:50.321227 | orchestrator |
2026-01-03 00:29:50.321233 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-01-03 00:29:50.321240 | orchestrator | Saturday 03 January 2026 00:29:37 +0000 (0:00:00.617) 0:03:32.152 ******
2026-01-03 00:29:50.321246 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-03 00:29:50.321251 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:29:50.321257 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-03 00:29:50.321307 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:29:50.321317 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-03 00:29:50.321324 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:29:50.321332 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-03 00:29:50.321339 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:29:50.321347 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-03 00:29:50.321373 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-03 00:29:50.321380 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-03 00:29:50.321386 | orchestrator |
2026-01-03 00:29:50.321393 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-01-03 00:29:50.321399 | orchestrator | Saturday 03 January 2026 00:29:38 +0000 (0:00:01.636) 0:03:33.788 ******
2026-01-03 00:29:50.321406 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:29:50.321413 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:29:50.321420 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:29:50.321428 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:29:50.321435 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:29:50.321441 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:29:50.321447 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:29:50.321453 | orchestrator |
2026-01-03 00:29:50.321460 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-01-03 00:29:50.321466 | orchestrator | Saturday 03 January 2026 00:29:38 +0000 (0:00:00.303) 0:03:34.092 ******
2026-01-03 00:29:50.321472 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:29:50.321479 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:29:50.321485 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:29:50.321491 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:29:50.321498 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:29:50.321505 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:29:50.321512 | orchestrator | ok: [testbed-manager]
2026-01-03 00:29:50.321519 | orchestrator |
2026-01-03 00:29:50.321526 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-01-03 00:29:50.321534 | orchestrator | Saturday 03 January 2026 00:29:44 +0000 (0:00:05.111) 0:03:39.203 ******
2026-01-03 00:29:50.321541 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-01-03 00:29:50.321548 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-01-03 00:29:50.321554 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:29:50.321559 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-01-03 00:29:50.321565 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:29:50.321570 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-01-03 00:29:50.321575 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:29:50.321581 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-01-03 00:29:50.321588 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:29:50.321594 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:29:50.321601 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-01-03 00:29:50.321606 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:29:50.321612 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-01-03 00:29:50.321618 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:29:50.321623 | orchestrator |
2026-01-03 00:29:50.321630 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-01-03 00:29:50.321636 | orchestrator | Saturday 03 January 2026 00:29:44 +0000 (0:00:00.280) 0:03:39.484 ******
2026-01-03 00:29:50.321643 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-01-03 00:29:50.321650 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-01-03 00:29:50.321657 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-01-03 00:29:50.321681 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-01-03 00:29:50.321689 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-01-03 00:29:50.321696 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-01-03 00:29:50.321703 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-01-03 00:29:50.321709 | orchestrator |
2026-01-03 00:29:50.321716 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-01-03 00:29:50.321723 | orchestrator | Saturday 03 January 2026 00:29:45 +0000 (0:00:01.111) 0:03:40.596 ******
2026-01-03 00:29:50.321732 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:29:50.321748 | orchestrator |
2026-01-03 00:29:50.321756 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-01-03 00:29:50.321763 | orchestrator | Saturday 03 January 2026 00:29:45 +0000 (0:00:00.493) 0:03:41.090 ******
2026-01-03 00:29:50.321771 | orchestrator | ok: [testbed-manager]
2026-01-03 00:29:50.321779 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:29:50.321785 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:29:50.321793 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:29:50.321800 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:29:50.321807 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:29:50.321814 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:29:50.321822 | orchestrator |
2026-01-03 00:29:50.321830 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-01-03 00:29:50.321837 | orchestrator | Saturday 03 January 2026 00:29:47 +0000 (0:00:01.401) 0:03:42.491 ******
2026-01-03 00:29:50.321843 | orchestrator | ok: [testbed-manager]
2026-01-03 00:29:50.321850 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:29:50.321857 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:29:50.321865 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:29:50.321872 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:29:50.321880 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:29:50.321887 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:29:50.321895 | orchestrator |
2026-01-03 00:29:50.321902 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-01-03 00:29:50.321908 | orchestrator | Saturday 03 January 2026 00:29:47 +0000 (0:00:00.611) 0:03:43.102 ******
2026-01-03 00:29:50.321916 | orchestrator | changed: [testbed-manager]
2026-01-03 00:29:50.321924 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:29:50.321931 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:29:50.321939 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:29:50.321946 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:29:50.321953 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:29:50.321959 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:29:50.321965 | orchestrator |
2026-01-03 00:29:50.321971 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-01-03 00:29:50.321977 | orchestrator | Saturday 03 January 2026 00:29:48 +0000 (0:00:00.723) 0:03:43.825 ******
2026-01-03 00:29:50.321983 | orchestrator | ok: [testbed-manager]
2026-01-03 00:29:50.321989 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:29:50.321995 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:29:50.322001 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:29:50.322007 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:29:50.322013 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:29:50.322075 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:29:50.322081 | orchestrator |
2026-01-03 00:29:50.322087 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-01-03 00:29:50.322093 | orchestrator | Saturday 03 January 2026 00:29:49 +0000 (0:00:00.597) 0:03:44.423 ******
2026-01-03 00:29:50.322117 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767398751.9795027, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-03 00:29:50.322130 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767398764.4635954, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-03 00:29:50.322142 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767398751.4054775, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-03 00:29:50.322167 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767398753.603613, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-03 00:29:55.514520 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767398751.7791286, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-03 00:29:55.514632 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767398754.4945142, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-03 00:29:55.514667 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767398755.5031343, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-03 00:29:55.514689 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-03 00:29:55.514733 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-03 00:29:55.514774 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-03 00:29:55.514786 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-03 00:29:55.514874 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-03 00:29:55.514888 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-03 00:29:55.514900 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-03 00:29:55.514912 | orchestrator |
2026-01-03 00:29:55.514925 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-01-03 00:29:55.514938 | orchestrator | Saturday 03 January 2026 00:29:50 +0000 (0:00:01.009) 0:03:45.433 ******
2026-01-03 00:29:55.514953 | orchestrator | changed: [testbed-manager]
2026-01-03 00:29:55.514973 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:29:55.514991 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:29:55.515011 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:29:55.515031 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:29:55.515051 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:29:55.515064 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:29:55.515077 | orchestrator |
2026-01-03 00:29:55.515091 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-01-03 00:29:55.515114 | orchestrator | Saturday 03 January 2026 00:29:51 +0000 (0:00:01.193) 0:03:46.626 ******
2026-01-03 00:29:55.515125 | orchestrator | changed: [testbed-manager]
2026-01-03 00:29:55.515141 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:29:55.515164 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:29:55.515191 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:29:55.515207 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:29:55.515224 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:29:55.515241 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:29:55.515260 | orchestrator |
2026-01-03 00:29:55.515325 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-01-03 00:29:55.515346 | orchestrator | Saturday 03 January 2026 00:29:52 +0000 (0:00:01.186) 0:03:47.813 ******
2026-01-03 00:29:55.515388 | orchestrator | changed: [testbed-manager]
2026-01-03 00:29:55.515408 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:29:55.515429 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:29:55.515447 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:29:55.515465 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:29:55.515482 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:29:55.515500 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:29:55.515512 | orchestrator |
2026-01-03 00:29:55.515523 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-01-03 00:29:55.515534 | orchestrator | Saturday 03 January 2026 00:29:53 +0000 (0:00:01.288) 0:03:49.101 ******
2026-01-03 00:29:55.515545 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:29:55.515556 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:29:55.515567 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:29:55.515578 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:29:55.515589 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:29:55.515599 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:29:55.515610 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:29:55.515621 | orchestrator |
2026-01-03 00:29:55.515632 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-01-03 00:29:55.515643 | orchestrator | Saturday 03 January 2026 00:29:54 +0000 (0:00:00.350) 0:03:49.452 ******
2026-01-03 00:29:55.515654 | orchestrator | ok: [testbed-manager]
2026-01-03 00:29:55.515666 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:29:55.515677 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:29:55.515687 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:29:55.515698 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:29:55.515709 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:29:55.515720 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:29:55.515731 | orchestrator |
2026-01-03 00:29:55.515742 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-01-03 00:29:55.515753 | orchestrator | Saturday 03 January 2026 00:29:55 +0000 (0:00:00.773) 0:03:50.225 ******
2026-01-03 00:29:55.515765 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:29:55.515778 | orchestrator |
2026-01-03 00:29:55.515789 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-01-03 00:29:55.515811 | orchestrator | Saturday 03 January 2026 00:29:55 +0000 (0:00:00.406) 0:03:50.631 ******
2026-01-03 00:31:13.794323 | orchestrator | ok: [testbed-manager]
2026-01-03 00:31:13.794433 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:31:13.794446 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:31:13.794455 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:31:13.794464 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:31:13.794472 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:31:13.794480 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:31:13.794488 | orchestrator |
2026-01-03 00:31:13.794497 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-01-03 00:31:13.794526 | orchestrator | Saturday 03 January 2026 00:30:03 +0000 (0:00:08.409) 0:03:59.041 ******
2026-01-03 00:31:13.794532 | orchestrator | ok: [testbed-manager]
2026-01-03 00:31:13.794538 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:31:13.794544 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:31:13.794551 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:31:13.794559 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:31:13.794567 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:31:13.794575 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:31:13.794583 | orchestrator |
2026-01-03 00:31:13.794592 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-01-03 00:31:13.794600 | orchestrator | Saturday 03 January 2026 00:30:05 +0000 (0:00:01.292) 0:04:00.334 ******
2026-01-03 00:31:13.794608 | orchestrator | ok: [testbed-manager]
2026-01-03 00:31:13.794616 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:31:13.794623 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:31:13.794631 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:31:13.794639 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:31:13.794647 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:31:13.794654 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:31:13.794661 | orchestrator |
2026-01-03 00:31:13.794669 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-01-03 00:31:13.794676 | orchestrator | Saturday 03 January 2026 00:30:06 +0000 (0:00:01.259) 0:04:01.593 ******
2026-01-03 00:31:13.794683 | orchestrator | ok: [testbed-manager]
2026-01-03 00:31:13.794691 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:31:13.794698 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:31:13.794705 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:31:13.794713 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:31:13.794721 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:31:13.794729 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:31:13.794736 | orchestrator |
2026-01-03 00:31:13.794743 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-01-03 00:31:13.794751 | orchestrator | Saturday 03 January 2026 00:30:06 +0000 (0:00:00.315) 0:04:01.909 ******
2026-01-03 00:31:13.794759 | orchestrator | ok: [testbed-manager]
2026-01-03 00:31:13.794767 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:31:13.794774 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:31:13.794782 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:31:13.794790 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:31:13.794796 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:31:13.794804 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:31:13.794812 | orchestrator |
2026-01-03 00:31:13.794818 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-01-03 00:31:13.794823 | orchestrator | Saturday 03 January 2026 00:30:07 +0000 (0:00:00.344) 0:04:02.253 ******
2026-01-03 00:31:13.794829 | orchestrator | ok: [testbed-manager]
2026-01-03 00:31:13.794834 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:31:13.794839 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:31:13.794845 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:31:13.794850 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:31:13.794856 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:31:13.794862 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:31:13.794867 | orchestrator |
2026-01-03 00:31:13.794873 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-01-03 00:31:13.794878 | orchestrator | Saturday 03 January 2026 00:30:07 +0000 (0:00:00.304) 0:04:02.558 ******
2026-01-03 00:31:13.794884 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:31:13.794890 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:31:13.794895 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:31:13.794900 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:31:13.794906 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:31:13.794912 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:31:13.794917 | orchestrator | ok: [testbed-manager]
2026-01-03 00:31:13.794923 | orchestrator |
2026-01-03 00:31:13.794943 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-01-03 00:31:13.794955 | orchestrator | Saturday 03 January 2026 00:30:12 +0000 (0:00:04.740) 0:04:07.299 ******
2026-01-03 00:31:13.794963 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:31:13.794971 | orchestrator |
2026-01-03 00:31:13.794977 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-01-03 00:31:13.794985 | orchestrator | Saturday 03 January 2026 00:30:12 +0000 (0:00:00.354) 0:04:07.653 ******
2026-01-03 00:31:13.794994 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-01-03 00:31:13.795002 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-01-03 00:31:13.795011 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-01-03 00:31:13.795019 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:31:13.795026 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-01-03 00:31:13.795034 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-01-03 00:31:13.795042 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-01-03 00:31:13.795051 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:31:13.795059 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-01-03 00:31:13.795067 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:31:13.795075 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-01-03 00:31:13.795084 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-01-03 00:31:13.795093 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-01-03 00:31:13.795101 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:31:13.795109 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-01-03 00:31:13.795117 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-01-03 00:31:13.795143 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:31:13.795152 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:31:13.795160 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-01-03 00:31:13.795168 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-01-03 00:31:13.795177 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:31:13.795185 | orchestrator |
2026-01-03 00:31:13.795193 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-01-03 00:31:13.795201 | orchestrator | Saturday 03 January 2026 00:30:12 +0000 (0:00:00.337) 0:04:07.991 ******
2026-01-03 00:31:13.795211 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:31:13.795219 | orchestrator |
2026-01-03 00:31:13.795226 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-01-03 00:31:13.795234 | orchestrator | Saturday 03 January 2026 00:30:13 +0000 (0:00:00.392) 0:04:08.383 ******
2026-01-03 00:31:13.795241 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-01-03 00:31:13.795249 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-01-03 00:31:13.795256 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:31:13.795264 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-01-03 00:31:13.795272 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:31:13.795279 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-01-03 00:31:13.795287 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:31:13.795343 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:31:13.795352 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-01-03 00:31:13.795360 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-01-03 00:31:13.795368 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:31:13.795385 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:31:13.795393 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-01-03 00:31:13.795400 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:31:13.795419 | orchestrator |
2026-01-03 00:31:13.795427 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-01-03 00:31:13.795443 | orchestrator | Saturday 03 January 2026 00:30:13 +0000 (0:00:00.317) 0:04:08.701 ******
2026-01-03 00:31:13.795451 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:31:13.795459 | orchestrator |
2026-01-03 00:31:13.795466 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-01-03 00:31:13.795473 | orchestrator | Saturday 03 January 2026 00:30:14 +0000 (0:00:00.433) 0:04:09.134 ******
2026-01-03 00:31:13.795481 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:31:13.795489 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:31:13.795497 | orchestrator | changed: [testbed-manager]
2026-01-03 00:31:13.795504 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:31:13.795512 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:31:13.795519 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:31:13.795526 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:31:13.795532 | orchestrator |
2026-01-03 00:31:13.795537 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-01-03 00:31:13.795543 | orchestrator | Saturday 03 January 2026 00:30:47 +0000 (0:00:33.087) 0:04:42.222 ******
2026-01-03 00:31:13.795551 | orchestrator | changed: [testbed-manager]
2026-01-03 00:31:13.795558 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:31:13.795565 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:31:13.795573 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:31:13.795580 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:31:13.795598 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:31:13.795607 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:31:13.795614 | orchestrator |
2026-01-03 00:31:13.795621 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-01-03 00:31:13.795629 | orchestrator | Saturday 03 January 2026 00:30:56 +0000 (0:00:08.937) 0:04:51.159 ******
2026-01-03 00:31:13.795636 | orchestrator | changed: [testbed-manager]
2026-01-03 00:31:13.795643 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:31:13.795651 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:31:13.795659 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:31:13.795666 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:31:13.795673 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:31:13.795680 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:31:13.795688 | orchestrator |
2026-01-03 00:31:13.795695 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-01-03 00:31:13.795703 | orchestrator | Saturday 03 January 2026 00:31:04 +0000 (0:00:08.609) 0:04:59.769 ******
2026-01-03 00:31:13.795711 | orchestrator | ok: [testbed-manager]
2026-01-03 00:31:13.795718 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:31:13.795725 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:31:13.795733 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:31:13.795740 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:31:13.795748 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:31:13.795755 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:31:13.795762 | orchestrator |
2026-01-03 00:31:13.795770 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-01-03 00:31:13.795778 | orchestrator | Saturday 03 January 2026 00:31:06 +0000 (0:00:01.954) 0:05:01.723 ******
2026-01-03 00:31:13.795785 | orchestrator | changed: [testbed-manager]
2026-01-03 00:31:13.795793 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:31:13.795801 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:31:13.795808 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:31:13.795822 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:31:13.795829 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:31:13.795837 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:31:13.795844 | orchestrator |
2026-01-03 00:31:13.795862 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-01-03 00:31:25.633759 | orchestrator | Saturday 03 January 2026 00:31:13 +0000 (0:00:07.175) 0:05:08.899 ******
2026-01-03 00:31:25.633882 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5,
testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:31:25.633902 | orchestrator | 2026-01-03 00:31:25.633914 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-01-03 00:31:25.633926 | orchestrator | Saturday 03 January 2026 00:31:14 +0000 (0:00:00.536) 0:05:09.435 ****** 2026-01-03 00:31:25.633938 | orchestrator | changed: [testbed-manager] 2026-01-03 00:31:25.633951 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:31:25.633962 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:31:25.633972 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:31:25.633983 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:31:25.633994 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:31:25.634005 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:31:25.634074 | orchestrator | 2026-01-03 00:31:25.634088 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-01-03 00:31:25.634099 | orchestrator | Saturday 03 January 2026 00:31:15 +0000 (0:00:00.773) 0:05:10.209 ****** 2026-01-03 00:31:25.634110 | orchestrator | ok: [testbed-manager] 2026-01-03 00:31:25.634122 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:31:25.634133 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:31:25.634144 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:31:25.634154 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:31:25.634165 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:31:25.634176 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:31:25.634186 | orchestrator | 2026-01-03 00:31:25.634197 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-01-03 00:31:25.634208 | orchestrator | Saturday 03 January 2026 00:31:16 +0000 (0:00:01.802) 0:05:12.011 ****** 2026-01-03 00:31:25.634219 | orchestrator | changed: [testbed-manager] 2026-01-03 00:31:25.634230 | orchestrator | changed: [testbed-node-3] 
2026-01-03 00:31:25.634241 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:31:25.634251 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:31:25.634262 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:31:25.634273 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:31:25.634284 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:31:25.634319 | orchestrator |
2026-01-03 00:31:25.634332 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-01-03 00:31:25.634344 | orchestrator | Saturday 03 January 2026 00:31:17 +0000 (0:00:00.843) 0:05:12.855 ******
2026-01-03 00:31:25.634357 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:31:25.634370 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:31:25.634382 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:31:25.634395 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:31:25.634407 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:31:25.634419 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:31:25.634432 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:31:25.634445 | orchestrator |
2026-01-03 00:31:25.634458 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-01-03 00:31:25.634470 | orchestrator | Saturday 03 January 2026 00:31:18 +0000 (0:00:00.286) 0:05:13.141 ******
2026-01-03 00:31:25.634483 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:31:25.634494 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:31:25.634504 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:31:25.634515 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:31:25.634526 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:31:25.634561 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:31:25.634572 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:31:25.634583 | orchestrator |
2026-01-03 00:31:25.634594 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-01-03 00:31:25.634605 | orchestrator | Saturday 03 January 2026 00:31:18 +0000 (0:00:00.362) 0:05:13.504 ******
2026-01-03 00:31:25.634615 | orchestrator | ok: [testbed-manager]
2026-01-03 00:31:25.634641 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:31:25.634652 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:31:25.634663 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:31:25.634674 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:31:25.634684 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:31:25.634695 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:31:25.634706 | orchestrator |
2026-01-03 00:31:25.634716 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-01-03 00:31:25.634727 | orchestrator | Saturday 03 January 2026 00:31:18 +0000 (0:00:00.275) 0:05:13.779 ******
2026-01-03 00:31:25.634738 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:31:25.634749 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:31:25.634759 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:31:25.634770 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:31:25.634780 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:31:25.634791 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:31:25.634801 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:31:25.634812 | orchestrator |
2026-01-03 00:31:25.634823 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-01-03 00:31:25.634834 | orchestrator | Saturday 03 January 2026 00:31:18 +0000 (0:00:00.264) 0:05:14.044 ******
2026-01-03 00:31:25.634845 | orchestrator | ok: [testbed-manager]
2026-01-03 00:31:25.634856 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:31:25.634866 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:31:25.634877 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:31:25.634888 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:31:25.634898 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:31:25.634909 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:31:25.634919 | orchestrator |
2026-01-03 00:31:25.634930 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-01-03 00:31:25.634941 | orchestrator | Saturday 03 January 2026 00:31:19 +0000 (0:00:00.320) 0:05:14.365 ******
2026-01-03 00:31:25.634952 | orchestrator | ok: [testbed-manager] =>
2026-01-03 00:31:25.634962 | orchestrator |  docker_version: 5:27.5.1
2026-01-03 00:31:25.634973 | orchestrator | ok: [testbed-node-3] =>
2026-01-03 00:31:25.634984 | orchestrator |  docker_version: 5:27.5.1
2026-01-03 00:31:25.634994 | orchestrator | ok: [testbed-node-4] =>
2026-01-03 00:31:25.635005 | orchestrator |  docker_version: 5:27.5.1
2026-01-03 00:31:25.635016 | orchestrator | ok: [testbed-node-5] =>
2026-01-03 00:31:25.635026 | orchestrator |  docker_version: 5:27.5.1
2026-01-03 00:31:25.635055 | orchestrator | ok: [testbed-node-0] =>
2026-01-03 00:31:25.635066 | orchestrator |  docker_version: 5:27.5.1
2026-01-03 00:31:25.635077 | orchestrator | ok: [testbed-node-1] =>
2026-01-03 00:31:25.635088 | orchestrator |  docker_version: 5:27.5.1
2026-01-03 00:31:25.635099 | orchestrator | ok: [testbed-node-2] =>
2026-01-03 00:31:25.635109 | orchestrator |  docker_version: 5:27.5.1
2026-01-03 00:31:25.635120 | orchestrator |
2026-01-03 00:31:25.635131 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-01-03 00:31:25.635142 | orchestrator | Saturday 03 January 2026 00:31:19 +0000 (0:00:00.246) 0:05:14.611 ******
2026-01-03 00:31:25.635152 | orchestrator | ok: [testbed-manager] =>
2026-01-03 00:31:25.635163 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-03 00:31:25.635174 | orchestrator | ok: [testbed-node-3] =>
2026-01-03 00:31:25.635184 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-03 00:31:25.635195 | orchestrator | ok: [testbed-node-4] =>
2026-01-03 00:31:25.635205 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-03 00:31:25.635224 | orchestrator | ok: [testbed-node-5] =>
2026-01-03 00:31:25.635235 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-03 00:31:25.635246 | orchestrator | ok: [testbed-node-0] =>
2026-01-03 00:31:25.635256 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-03 00:31:25.635267 | orchestrator | ok: [testbed-node-1] =>
2026-01-03 00:31:25.635278 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-03 00:31:25.635288 | orchestrator | ok: [testbed-node-2] =>
2026-01-03 00:31:25.635317 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-03 00:31:25.635328 | orchestrator |
2026-01-03 00:31:25.635339 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-01-03 00:31:25.635350 | orchestrator | Saturday 03 January 2026 00:31:19 +0000 (0:00:00.280) 0:05:14.892 ******
2026-01-03 00:31:25.635361 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:31:25.635372 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:31:25.635382 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:31:25.635393 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:31:25.635404 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:31:25.635414 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:31:25.635425 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:31:25.635436 | orchestrator |
2026-01-03 00:31:25.635447 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-01-03 00:31:25.635457 | orchestrator | Saturday 03 January 2026 00:31:20 +0000 (0:00:00.296) 0:05:15.189 ******
2026-01-03 00:31:25.635468 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:31:25.635479 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:31:25.635489 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:31:25.635500 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:31:25.635511 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:31:25.635522 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:31:25.635532 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:31:25.635543 | orchestrator |
2026-01-03 00:31:25.635554 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-01-03 00:31:25.635565 | orchestrator | Saturday 03 January 2026 00:31:20 +0000 (0:00:00.250) 0:05:15.439 ******
2026-01-03 00:31:25.635577 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:31:25.635590 | orchestrator |
2026-01-03 00:31:25.635601 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-01-03 00:31:25.635612 | orchestrator | Saturday 03 January 2026 00:31:20 +0000 (0:00:01.045) 0:05:15.852 ******
2026-01-03 00:31:25.635622 | orchestrator | ok: [testbed-manager]
2026-01-03 00:31:25.635633 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:31:25.635644 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:31:25.635655 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:31:25.635666 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:31:25.635676 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:31:25.635687 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:31:25.635697 | orchestrator |
2026-01-03 00:31:25.635714 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-01-03 00:31:25.635725 | orchestrator | Saturday 03 January 2026 00:31:21 +0000 (0:00:03.485) 0:05:16.898 ******
2026-01-03 00:31:25.635736 | orchestrator | ok: [testbed-manager]
2026-01-03 00:31:25.635747 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:31:25.635758 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:31:25.635768 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:31:25.635779 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:31:25.635790 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:31:25.635801 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:31:25.635812 | orchestrator |
2026-01-03 00:31:25.635823 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-01-03 00:31:25.635834 | orchestrator | Saturday 03 January 2026 00:31:25 +0000 (0:00:03.485) 0:05:20.384 ******
2026-01-03 00:31:25.635853 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-01-03 00:31:25.635865 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-01-03 00:31:25.635876 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-01-03 00:31:25.635887 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-01-03 00:31:25.635898 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-01-03 00:31:25.635908 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-01-03 00:31:25.635919 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:31:25.635930 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-01-03 00:31:25.635941 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-01-03 00:31:25.635952 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:31:25.635963 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-01-03 00:31:25.635974 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-01-03 00:31:25.635984 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-01-03 00:31:25.635995 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-01-03 00:31:25.636006 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:31:25.636017 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-01-03 00:31:25.636035 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-01-03 00:32:28.332376 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-01-03 00:32:28.332519 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:32:28.332550 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-01-03 00:32:28.332570 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-01-03 00:32:28.332590 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-01-03 00:32:28.332609 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:32:28.332629 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:32:28.332647 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-01-03 00:32:28.332665 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-01-03 00:32:28.332683 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-01-03 00:32:28.332702 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:32:28.332721 | orchestrator |
2026-01-03 00:32:28.332740 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-01-03 00:32:28.332760 | orchestrator | Saturday 03 January 2026 00:31:25 +0000 (0:00:00.560) 0:05:20.944 ******
2026-01-03 00:32:28.332779 | orchestrator | ok: [testbed-manager]
2026-01-03 00:32:28.332797 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:32:28.332816 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:32:28.332833 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:32:28.332852 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:32:28.332871 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:32:28.332891 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:32:28.332910 | orchestrator |
2026-01-03 00:32:28.332929 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-01-03 00:32:28.332950 | orchestrator | Saturday 03 January 2026 00:31:33 +0000 (0:00:07.373) 0:05:28.318 ******
2026-01-03 00:32:28.332968 | orchestrator | ok: [testbed-manager]
2026-01-03 00:32:28.332990 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:32:28.333008 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:32:28.333030 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:32:28.333048 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:32:28.333066 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:32:28.333085 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:32:28.333104 | orchestrator |
2026-01-03 00:32:28.333124 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-01-03 00:32:28.333144 | orchestrator | Saturday 03 January 2026 00:31:34 +0000 (0:00:01.052) 0:05:29.371 ******
2026-01-03 00:32:28.333161 | orchestrator | ok: [testbed-manager]
2026-01-03 00:32:28.333215 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:32:28.333235 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:32:28.333254 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:32:28.333332 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:32:28.333351 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:32:28.333369 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:32:28.333385 | orchestrator |
2026-01-03 00:32:28.333404 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-01-03 00:32:28.333423 | orchestrator | Saturday 03 January 2026 00:31:42 +0000 (0:00:08.718) 0:05:38.089 ******
2026-01-03 00:32:28.333441 | orchestrator | changed: [testbed-manager]
2026-01-03 00:32:28.333458 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:32:28.333475 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:32:28.333493 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:32:28.333511 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:32:28.333528 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:32:28.333545 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:32:28.333564 | orchestrator |
2026-01-03 00:32:28.333581 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-01-03 00:32:28.333599 | orchestrator | Saturday 03 January 2026 00:31:46 +0000 (0:00:03.241) 0:05:41.330 ******
2026-01-03 00:32:28.333616 | orchestrator | ok: [testbed-manager]
2026-01-03 00:32:28.333633 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:32:28.333651 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:32:28.333669 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:32:28.333688 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:32:28.333730 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:32:28.333751 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:32:28.333770 | orchestrator |
2026-01-03 00:32:28.333787 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-01-03 00:32:28.333804 | orchestrator | Saturday 03 January 2026 00:31:47 +0000 (0:00:01.318) 0:05:42.649 ******
2026-01-03 00:32:28.333821 | orchestrator | ok: [testbed-manager]
2026-01-03 00:32:28.333838 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:32:28.333855 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:32:28.333873 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:32:28.333891 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:32:28.333910 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:32:28.333929 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:32:28.333948 | orchestrator |
2026-01-03 00:32:28.333967 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-01-03 00:32:28.333986 | orchestrator | Saturday 03 January 2026 00:31:49 +0000 (0:00:01.526) 0:05:44.175 ******
2026-01-03 00:32:28.334004 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:32:28.334120 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:32:28.334143 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:32:28.334163 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:32:28.334182 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:32:28.334216 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:32:28.334235 | orchestrator | changed: [testbed-manager]
2026-01-03 00:32:28.334253 | orchestrator |
2026-01-03 00:32:28.334301 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-01-03 00:32:28.334322 | orchestrator | Saturday 03 January 2026 00:31:49 +0000 (0:00:00.583) 0:05:44.759 ******
2026-01-03 00:32:28.334341 | orchestrator | ok: [testbed-manager]
2026-01-03 00:32:28.334360 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:32:28.334378 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:32:28.334397 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:32:28.334415 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:32:28.334427 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:32:28.334437 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:32:28.334448 | orchestrator |
2026-01-03 00:32:28.334459 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-01-03 00:32:28.334526 | orchestrator | Saturday 03 January 2026 00:32:00 +0000 (0:00:10.501) 0:05:55.260 ******
2026-01-03 00:32:28.334546 | orchestrator | changed: [testbed-manager]
2026-01-03 00:32:28.334564 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:32:28.334581 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:32:28.334599 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:32:28.334616 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:32:28.334634 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:32:28.334651 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:32:28.334669 | orchestrator |
2026-01-03 00:32:28.334688 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-01-03 00:32:28.334707 | orchestrator | Saturday 03 January 2026 00:32:01 +0000 (0:00:00.925) 0:05:56.185 ******
2026-01-03 00:32:28.334726 | orchestrator | ok: [testbed-manager]
2026-01-03 00:32:28.334744 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:32:28.334764 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:32:28.334782 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:32:28.334802 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:32:28.334814 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:32:28.334825 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:32:28.334835 | orchestrator |
2026-01-03 00:32:28.334846 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-01-03 00:32:28.334857 | orchestrator | Saturday 03 January 2026 00:32:10 +0000 (0:00:09.309) 0:06:05.495 ******
2026-01-03 00:32:28.334868 | orchestrator | ok: [testbed-manager]
2026-01-03 00:32:28.334878 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:32:28.334889 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:32:28.334899 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:32:28.334910 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:32:28.334921 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:32:28.334932 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:32:28.334942 | orchestrator |
2026-01-03 00:32:28.334961 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-01-03 00:32:28.334980 | orchestrator | Saturday 03 January 2026 00:32:21 +0000 (0:00:11.118) 0:06:16.613 ******
2026-01-03
00:32:28.334998 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-01-03 00:32:28.335016 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-01-03 00:32:28.335033 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-01-03 00:32:28.335051 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-01-03 00:32:28.335068 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-01-03 00:32:28.335088 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-01-03 00:32:28.335106 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-01-03 00:32:28.335125 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-01-03 00:32:28.335136 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-01-03 00:32:28.335147 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-01-03 00:32:28.335158 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-01-03 00:32:28.335169 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-01-03 00:32:28.335180 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-01-03 00:32:28.335190 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-01-03 00:32:28.335201 | orchestrator |
2026-01-03 00:32:28.335212 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-01-03 00:32:28.335223 | orchestrator | Saturday 03 January 2026 00:32:22 +0000 (0:00:01.211) 0:06:17.824 ******
2026-01-03 00:32:28.335234 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:32:28.335245 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:32:28.335255 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:32:28.335298 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:32:28.335310 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:32:28.335321 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:32:28.335344 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:32:28.335355 | orchestrator |
2026-01-03 00:32:28.335366 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-01-03 00:32:28.335377 | orchestrator | Saturday 03 January 2026 00:32:23 +0000 (0:00:00.509) 0:06:18.334 ******
2026-01-03 00:32:28.335388 | orchestrator | ok: [testbed-manager]
2026-01-03 00:32:28.335399 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:32:28.335409 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:32:28.335420 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:32:28.335431 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:32:28.335442 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:32:28.335452 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:32:28.335463 | orchestrator |
2026-01-03 00:32:28.335474 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-01-03 00:32:28.335487 | orchestrator | Saturday 03 January 2026 00:32:27 +0000 (0:00:04.250) 0:06:22.584 ******
2026-01-03 00:32:28.335498 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:32:28.335508 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:32:28.335519 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:32:28.335530 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:32:28.335541 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:32:28.335557 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:32:28.335575 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:32:28.335593 | orchestrator |
2026-01-03 00:32:28.335611 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-01-03 00:32:28.335628 | orchestrator | Saturday 03 January 2026 00:32:27 +0000 (0:00:00.457) 0:06:23.042 ******
2026-01-03 00:32:28.335645 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-01-03 00:32:28.335663 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-01-03 00:32:28.335680 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:32:28.335696 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-01-03 00:32:28.335713 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-01-03 00:32:28.335730 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:32:28.335747 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-01-03 00:32:28.335764 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-01-03 00:32:28.335781 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:32:28.335816 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-01-03 00:32:47.437062 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-01-03 00:32:47.437184 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:32:47.437201 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-01-03 00:32:47.437214 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-01-03 00:32:47.437225 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:32:47.437236 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-01-03 00:32:47.437247 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-01-03 00:32:47.437292 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:32:47.437303 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-01-03 00:32:47.437315 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-01-03 00:32:47.437326 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:32:47.437337 | orchestrator |
2026-01-03 00:32:47.437350 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-01-03 00:32:47.437362 | orchestrator | Saturday 03 January 2026 00:32:28 +0000 (0:00:00.649) 0:06:23.691 ******
2026-01-03 00:32:47.437374 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:32:47.437385 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:32:47.437396 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:32:47.437406 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:32:47.437441 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:32:47.437452 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:32:47.437464 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:32:47.437482 | orchestrator |
2026-01-03 00:32:47.437562 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-01-03 00:32:47.437585 | orchestrator | Saturday 03 January 2026 00:32:29 +0000 (0:00:00.478) 0:06:24.169 ******
2026-01-03 00:32:47.437605 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:32:47.437625 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:32:47.437638 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:32:47.437651 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:32:47.437664 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:32:47.437677 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:32:47.437694 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:32:47.437716 | orchestrator |
2026-01-03 00:32:47.437744 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-01-03 00:32:47.437762 | orchestrator | Saturday 03 January 2026 00:32:29 +0000 (0:00:00.470) 0:06:24.640 ******
2026-01-03 00:32:47.437778 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:32:47.437795 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:32:47.437813 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:32:47.437830 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:32:47.437847 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:32:47.437864 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:32:47.437881 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:32:47.437900 | orchestrator |
2026-01-03 00:32:47.437920 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-01-03 00:32:47.437938 | orchestrator | Saturday 03 January 2026 00:32:30 +0000 (0:00:00.490) 0:06:25.130 ******
2026-01-03 00:32:47.437954 | orchestrator | ok: [testbed-manager]
2026-01-03 00:32:47.437965 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:32:47.437976 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:32:47.437987 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:32:47.437998 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:32:47.438009 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:32:47.438084 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:32:47.438096 | orchestrator |
2026-01-03 00:32:47.438107 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-01-03 00:32:47.438118 | orchestrator | Saturday 03 January 2026 00:32:31 +0000 (0:00:01.987) 0:06:27.117 ******
2026-01-03 00:32:47.438138 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:32:47.438152 | orchestrator |
2026-01-03 00:32:47.438192 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-01-03 00:32:47.438214 | orchestrator | Saturday 03 January 2026 00:32:32 +0000 (0:00:00.807) 0:06:27.925 ******
2026-01-03 00:32:47.438225 | orchestrator | ok: [testbed-manager]
2026-01-03 00:32:47.438236 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:32:47.438247 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:32:47.438321 | orchestrator |
changed: [testbed-node-5] 2026-01-03 00:32:47.438333 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:32:47.438343 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:32:47.438355 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:32:47.438366 | orchestrator | 2026-01-03 00:32:47.438376 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-01-03 00:32:47.438387 | orchestrator | Saturday 03 January 2026 00:32:33 +0000 (0:00:00.834) 0:06:28.760 ****** 2026-01-03 00:32:47.438398 | orchestrator | ok: [testbed-manager] 2026-01-03 00:32:47.438408 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:32:47.438419 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:32:47.438430 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:32:47.438454 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:32:47.438467 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:32:47.438486 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:32:47.438512 | orchestrator | 2026-01-03 00:32:47.438534 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-01-03 00:32:47.438552 | orchestrator | Saturday 03 January 2026 00:32:34 +0000 (0:00:00.833) 0:06:29.593 ****** 2026-01-03 00:32:47.438571 | orchestrator | ok: [testbed-manager] 2026-01-03 00:32:47.438590 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:32:47.438607 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:32:47.438626 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:32:47.438645 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:32:47.438663 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:32:47.438682 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:32:47.438695 | orchestrator | 2026-01-03 00:32:47.438706 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2026-01-03 00:32:47.438740 | 
orchestrator | Saturday 03 January 2026 00:32:36 +0000 (0:00:01.590) 0:06:31.184 ****** 2026-01-03 00:32:47.438751 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:32:47.438762 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:32:47.438773 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:32:47.438783 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:32:47.438794 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:32:47.438804 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:32:47.438815 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:32:47.438826 | orchestrator | 2026-01-03 00:32:47.438836 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-01-03 00:32:47.438972 | orchestrator | Saturday 03 January 2026 00:32:37 +0000 (0:00:01.431) 0:06:32.616 ****** 2026-01-03 00:32:47.438992 | orchestrator | ok: [testbed-manager] 2026-01-03 00:32:47.439009 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:32:47.439026 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:32:47.439043 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:32:47.439111 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:32:47.439134 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:32:47.439154 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:32:47.439171 | orchestrator | 2026-01-03 00:32:47.439191 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-01-03 00:32:47.439202 | orchestrator | Saturday 03 January 2026 00:32:38 +0000 (0:00:01.345) 0:06:33.961 ****** 2026-01-03 00:32:47.439213 | orchestrator | changed: [testbed-manager] 2026-01-03 00:32:47.439224 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:32:47.439235 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:32:47.439246 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:32:47.439320 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:32:47.439331 | 
orchestrator | changed: [testbed-node-0] 2026-01-03 00:32:47.439342 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:32:47.439353 | orchestrator | 2026-01-03 00:32:47.439364 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-01-03 00:32:47.439374 | orchestrator | Saturday 03 January 2026 00:32:40 +0000 (0:00:01.368) 0:06:35.330 ****** 2026-01-03 00:32:47.439386 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:32:47.439398 | orchestrator | 2026-01-03 00:32:47.439409 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-01-03 00:32:47.439419 | orchestrator | Saturday 03 January 2026 00:32:41 +0000 (0:00:00.969) 0:06:36.300 ****** 2026-01-03 00:32:47.439430 | orchestrator | ok: [testbed-manager] 2026-01-03 00:32:47.439441 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:32:47.439452 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:32:47.439462 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:32:47.439481 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:32:47.439508 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:32:47.439552 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:32:47.439572 | orchestrator | 2026-01-03 00:32:47.439591 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-01-03 00:32:47.439611 | orchestrator | Saturday 03 January 2026 00:32:42 +0000 (0:00:01.443) 0:06:37.743 ****** 2026-01-03 00:32:47.439622 | orchestrator | ok: [testbed-manager] 2026-01-03 00:32:47.439633 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:32:47.439644 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:32:47.439655 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:32:47.439665 | orchestrator | 
ok: [testbed-node-0] 2026-01-03 00:32:47.439676 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:32:47.439687 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:32:47.439698 | orchestrator | 2026-01-03 00:32:47.439709 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-01-03 00:32:47.439720 | orchestrator | Saturday 03 January 2026 00:32:43 +0000 (0:00:01.224) 0:06:38.968 ****** 2026-01-03 00:32:47.439731 | orchestrator | ok: [testbed-manager] 2026-01-03 00:32:47.439741 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:32:47.439752 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:32:47.439763 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:32:47.439775 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:32:47.439785 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:32:47.439797 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:32:47.439807 | orchestrator | 2026-01-03 00:32:47.439819 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-01-03 00:32:47.439830 | orchestrator | Saturday 03 January 2026 00:32:45 +0000 (0:00:01.167) 0:06:40.135 ****** 2026-01-03 00:32:47.439841 | orchestrator | ok: [testbed-manager] 2026-01-03 00:32:47.439852 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:32:47.439862 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:32:47.439873 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:32:47.439884 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:32:47.439895 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:32:47.439905 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:32:47.439916 | orchestrator | 2026-01-03 00:32:47.439927 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-01-03 00:32:47.439939 | orchestrator | Saturday 03 January 2026 00:32:46 +0000 (0:00:01.291) 0:06:41.426 ****** 2026-01-03 00:32:47.439950 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:32:47.439961 | orchestrator | 2026-01-03 00:32:47.439972 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-03 00:32:47.439983 | orchestrator | Saturday 03 January 2026 00:32:47 +0000 (0:00:00.836) 0:06:42.263 ****** 2026-01-03 00:32:47.439994 | orchestrator | 2026-01-03 00:32:47.440005 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-03 00:32:47.440016 | orchestrator | Saturday 03 January 2026 00:32:47 +0000 (0:00:00.038) 0:06:42.302 ****** 2026-01-03 00:32:47.440027 | orchestrator | 2026-01-03 00:32:47.440038 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-03 00:32:47.440048 | orchestrator | Saturday 03 January 2026 00:32:47 +0000 (0:00:00.038) 0:06:42.340 ****** 2026-01-03 00:32:47.440059 | orchestrator | 2026-01-03 00:32:47.440070 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-03 00:32:47.440096 | orchestrator | Saturday 03 January 2026 00:32:47 +0000 (0:00:00.044) 0:06:42.385 ****** 2026-01-03 00:33:13.689939 | orchestrator | 2026-01-03 00:33:13.690111 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-03 00:33:13.690137 | orchestrator | Saturday 03 January 2026 00:32:47 +0000 (0:00:00.038) 0:06:42.423 ****** 2026-01-03 00:33:13.690151 | orchestrator | 2026-01-03 00:33:13.690166 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-03 00:33:13.690186 | orchestrator | Saturday 03 January 2026 00:32:47 +0000 (0:00:00.037) 0:06:42.461 ****** 2026-01-03 00:33:13.691146 | orchestrator | 2026-01-03 
00:33:13.691190 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-03 00:33:13.691199 | orchestrator | Saturday 03 January 2026 00:32:47 +0000 (0:00:00.044) 0:06:42.505 ****** 2026-01-03 00:33:13.691207 | orchestrator | 2026-01-03 00:33:13.691215 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-03 00:33:13.691258 | orchestrator | Saturday 03 January 2026 00:32:47 +0000 (0:00:00.038) 0:06:42.544 ****** 2026-01-03 00:33:13.691268 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:33:13.691276 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:33:13.691284 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:33:13.691292 | orchestrator | 2026-01-03 00:33:13.691300 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-01-03 00:33:13.691309 | orchestrator | Saturday 03 January 2026 00:32:48 +0000 (0:00:01.200) 0:06:43.744 ****** 2026-01-03 00:33:13.691317 | orchestrator | changed: [testbed-manager] 2026-01-03 00:33:13.691326 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:33:13.691334 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:33:13.691343 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:33:13.691357 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:33:13.691371 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:33:13.691384 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:33:13.691397 | orchestrator | 2026-01-03 00:33:13.691410 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-01-03 00:33:13.691423 | orchestrator | Saturday 03 January 2026 00:32:50 +0000 (0:00:01.696) 0:06:45.441 ****** 2026-01-03 00:33:13.691437 | orchestrator | changed: [testbed-manager] 2026-01-03 00:33:13.691451 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:33:13.691466 | orchestrator | changed: [testbed-node-4] 2026-01-03 
00:33:13.691479 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:33:13.691494 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:33:13.691509 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:33:13.691523 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:33:13.691537 | orchestrator | 2026-01-03 00:33:13.691552 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-01-03 00:33:13.691566 | orchestrator | Saturday 03 January 2026 00:32:51 +0000 (0:00:01.462) 0:06:46.904 ****** 2026-01-03 00:33:13.691580 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:33:13.691594 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:33:13.691609 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:33:13.691622 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:33:13.691636 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:33:13.691650 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:33:13.691664 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:33:13.691679 | orchestrator | 2026-01-03 00:33:13.691694 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-01-03 00:33:13.691709 | orchestrator | Saturday 03 January 2026 00:32:54 +0000 (0:00:02.224) 0:06:49.129 ****** 2026-01-03 00:33:13.691723 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:33:13.691739 | orchestrator | 2026-01-03 00:33:13.691752 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-01-03 00:33:13.691765 | orchestrator | Saturday 03 January 2026 00:32:54 +0000 (0:00:00.111) 0:06:49.240 ****** 2026-01-03 00:33:13.691779 | orchestrator | ok: [testbed-manager] 2026-01-03 00:33:13.691792 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:33:13.691806 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:33:13.691820 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:33:13.691833 | 
orchestrator | changed: [testbed-node-0] 2026-01-03 00:33:13.691866 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:33:13.691881 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:33:13.691895 | orchestrator | 2026-01-03 00:33:13.691909 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-01-03 00:33:13.691925 | orchestrator | Saturday 03 January 2026 00:32:55 +0000 (0:00:01.109) 0:06:50.350 ****** 2026-01-03 00:33:13.691959 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:33:13.691973 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:33:13.691987 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:33:13.692001 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:33:13.692015 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:33:13.692029 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:33:13.692042 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:33:13.692056 | orchestrator | 2026-01-03 00:33:13.692069 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-01-03 00:33:13.692082 | orchestrator | Saturday 03 January 2026 00:32:55 +0000 (0:00:00.512) 0:06:50.862 ****** 2026-01-03 00:33:13.692098 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:33:13.692115 | orchestrator | 2026-01-03 00:33:13.692128 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-01-03 00:33:13.692140 | orchestrator | Saturday 03 January 2026 00:32:56 +0000 (0:00:01.049) 0:06:51.912 ****** 2026-01-03 00:33:13.692149 | orchestrator | ok: [testbed-manager] 2026-01-03 00:33:13.692159 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:33:13.692173 | orchestrator | ok: 
[testbed-node-4] 2026-01-03 00:33:13.692186 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:33:13.692199 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:33:13.692211 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:33:13.692249 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:33:13.692263 | orchestrator | 2026-01-03 00:33:13.692275 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-01-03 00:33:13.692288 | orchestrator | Saturday 03 January 2026 00:32:57 +0000 (0:00:00.878) 0:06:52.790 ****** 2026-01-03 00:33:13.692301 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-01-03 00:33:13.692342 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-01-03 00:33:13.692357 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-01-03 00:33:13.692370 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-01-03 00:33:13.692383 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-01-03 00:33:13.692396 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-01-03 00:33:13.692409 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-01-03 00:33:13.692422 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-01-03 00:33:13.692436 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-01-03 00:33:13.692448 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-01-03 00:33:13.692462 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-01-03 00:33:13.692475 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-01-03 00:33:13.692488 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-01-03 00:33:13.692502 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-01-03 00:33:13.692515 | orchestrator | 2026-01-03 00:33:13.692528 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-01-03 00:33:13.692542 | orchestrator | Saturday 03 January 2026 00:33:00 +0000 (0:00:02.524) 0:06:55.315 ****** 2026-01-03 00:33:13.692555 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:33:13.692568 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:33:13.692581 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:33:13.692594 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:33:13.692608 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:33:13.692620 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:33:13.692632 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:33:13.692645 | orchestrator | 2026-01-03 00:33:13.692657 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-01-03 00:33:13.692680 | orchestrator | Saturday 03 January 2026 00:33:00 +0000 (0:00:00.632) 0:06:55.947 ****** 2026-01-03 00:33:13.692694 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:33:13.692708 | orchestrator | 2026-01-03 00:33:13.692720 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-01-03 00:33:13.692732 | orchestrator | Saturday 03 January 2026 00:33:01 +0000 (0:00:00.772) 0:06:56.720 ****** 2026-01-03 00:33:13.692744 | orchestrator | ok: [testbed-manager] 2026-01-03 00:33:13.692757 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:33:13.692769 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:33:13.692782 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:33:13.692796 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:33:13.692809 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:33:13.692823 | orchestrator | ok: 
[testbed-node-2] 2026-01-03 00:33:13.692836 | orchestrator | 2026-01-03 00:33:13.692849 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-01-03 00:33:13.692862 | orchestrator | Saturday 03 January 2026 00:33:02 +0000 (0:00:00.847) 0:06:57.567 ****** 2026-01-03 00:33:13.692876 | orchestrator | ok: [testbed-manager] 2026-01-03 00:33:13.692889 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:33:13.692903 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:33:13.692916 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:33:13.692929 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:33:13.692942 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:33:13.692955 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:33:13.692969 | orchestrator | 2026-01-03 00:33:13.692983 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-01-03 00:33:13.693005 | orchestrator | Saturday 03 January 2026 00:33:03 +0000 (0:00:00.952) 0:06:58.520 ****** 2026-01-03 00:33:13.693019 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:33:13.693032 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:33:13.693045 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:33:13.693059 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:33:13.693072 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:33:13.693087 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:33:13.693099 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:33:13.693113 | orchestrator | 2026-01-03 00:33:13.693126 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-01-03 00:33:13.693139 | orchestrator | Saturday 03 January 2026 00:33:03 +0000 (0:00:00.470) 0:06:58.991 ****** 2026-01-03 00:33:13.693153 | orchestrator | ok: [testbed-manager] 2026-01-03 00:33:13.693167 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:33:13.693181 | 
orchestrator | ok: [testbed-node-4] 2026-01-03 00:33:13.693194 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:33:13.693219 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:33:13.693249 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:33:13.693262 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:33:13.693275 | orchestrator | 2026-01-03 00:33:13.693288 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-01-03 00:33:13.693301 | orchestrator | Saturday 03 January 2026 00:33:05 +0000 (0:00:01.624) 0:07:00.615 ****** 2026-01-03 00:33:13.693314 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:33:13.693327 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:33:13.693339 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:33:13.693353 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:33:13.693366 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:33:13.693379 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:33:13.693393 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:33:13.693407 | orchestrator | 2026-01-03 00:33:13.693420 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-01-03 00:33:13.693434 | orchestrator | Saturday 03 January 2026 00:33:05 +0000 (0:00:00.465) 0:07:01.081 ****** 2026-01-03 00:33:13.693458 | orchestrator | ok: [testbed-manager] 2026-01-03 00:33:13.693473 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:33:13.693487 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:33:13.693500 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:33:13.693513 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:33:13.693526 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:33:13.693552 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:33:45.882698 | orchestrator | 2026-01-03 00:33:45.882816 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2026-01-03 00:33:45.882845 | orchestrator | Saturday 03 January 2026 00:33:13 +0000 (0:00:07.718) 0:07:08.800 ****** 2026-01-03 00:33:45.882867 | orchestrator | ok: [testbed-manager] 2026-01-03 00:33:45.882888 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:33:45.882910 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:33:45.882929 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:33:45.882949 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:33:45.882967 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:33:45.882987 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:33:45.883008 | orchestrator | 2026-01-03 00:33:45.883028 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-01-03 00:33:45.883046 | orchestrator | Saturday 03 January 2026 00:33:15 +0000 (0:00:01.559) 0:07:10.359 ****** 2026-01-03 00:33:45.883058 | orchestrator | ok: [testbed-manager] 2026-01-03 00:33:45.883069 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:33:45.883080 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:33:45.883091 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:33:45.883102 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:33:45.883114 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:33:45.883125 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:33:45.883136 | orchestrator | 2026-01-03 00:33:45.883147 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-01-03 00:33:45.883158 | orchestrator | Saturday 03 January 2026 00:33:16 +0000 (0:00:01.757) 0:07:12.116 ****** 2026-01-03 00:33:45.883169 | orchestrator | ok: [testbed-manager] 2026-01-03 00:33:45.883180 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:33:45.883246 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:33:45.883260 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:33:45.883273 | 
orchestrator | changed: [testbed-node-0] 2026-01-03 00:33:45.883285 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:33:45.883298 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:33:45.883310 | orchestrator | 2026-01-03 00:33:45.883324 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-03 00:33:45.883336 | orchestrator | Saturday 03 January 2026 00:33:18 +0000 (0:00:01.641) 0:07:13.758 ****** 2026-01-03 00:33:45.883349 | orchestrator | ok: [testbed-manager] 2026-01-03 00:33:45.883362 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:33:45.883374 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:33:45.883387 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:33:45.883400 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:33:45.883413 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:33:45.883425 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:33:45.883437 | orchestrator | 2026-01-03 00:33:45.883451 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-03 00:33:45.883464 | orchestrator | Saturday 03 January 2026 00:33:19 +0000 (0:00:00.851) 0:07:14.609 ****** 2026-01-03 00:33:45.883477 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:33:45.883494 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:33:45.883514 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:33:45.883534 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:33:45.883554 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:33:45.883571 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:33:45.883589 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:33:45.883610 | orchestrator | 2026-01-03 00:33:45.883631 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-01-03 00:33:45.883681 | orchestrator | Saturday 03 January 2026 00:33:20 +0000 (0:00:00.917) 0:07:15.527 ****** 
2026-01-03 00:33:45.883693 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:33:45.883704 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:33:45.883715 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:33:45.883726 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:33:45.883737 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:33:45.883747 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:33:45.883758 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:33:45.883769 | orchestrator | 2026-01-03 00:33:45.883780 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-01-03 00:33:45.883791 | orchestrator | Saturday 03 January 2026 00:33:20 +0000 (0:00:00.508) 0:07:16.036 ****** 2026-01-03 00:33:45.883802 | orchestrator | ok: [testbed-manager] 2026-01-03 00:33:45.883813 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:33:45.883824 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:33:45.883840 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:33:45.883858 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:33:45.883876 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:33:45.883894 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:33:45.883913 | orchestrator | 2026-01-03 00:33:45.883931 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2026-01-03 00:33:45.883949 | orchestrator | Saturday 03 January 2026 00:33:21 +0000 (0:00:00.492) 0:07:16.528 ****** 2026-01-03 00:33:45.883960 | orchestrator | ok: [testbed-manager] 2026-01-03 00:33:45.883971 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:33:45.883982 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:33:45.883993 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:33:45.884004 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:33:45.884014 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:33:45.884025 | orchestrator | ok: [testbed-node-2] 2026-01-03 
00:33:45.884036 | orchestrator |
2026-01-03 00:33:45.884047 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-01-03 00:33:45.884059 | orchestrator | Saturday 03 January 2026 00:33:21 +0000 (0:00:00.495) 0:07:17.023 ******
2026-01-03 00:33:45.884070 | orchestrator | ok: [testbed-manager]
2026-01-03 00:33:45.884081 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:33:45.884092 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:33:45.884102 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:33:45.884113 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:33:45.884124 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:33:45.884135 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:33:45.884146 | orchestrator |
2026-01-03 00:33:45.884157 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-01-03 00:33:45.884168 | orchestrator | Saturday 03 January 2026 00:33:22 +0000 (0:00:00.651) 0:07:17.675 ******
2026-01-03 00:33:45.884179 | orchestrator | ok: [testbed-manager]
2026-01-03 00:33:45.884214 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:33:45.884225 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:33:45.884236 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:33:45.884247 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:33:45.884258 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:33:45.884268 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:33:45.884279 | orchestrator |
2026-01-03 00:33:45.884310 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-01-03 00:33:45.884322 | orchestrator | Saturday 03 January 2026 00:33:27 +0000 (0:00:05.394) 0:07:23.069 ******
2026-01-03 00:33:45.884333 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:33:45.884344 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:33:45.884354 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:33:45.884365 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:33:45.884376 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:33:45.884387 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:33:45.884397 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:33:45.884419 | orchestrator |
2026-01-03 00:33:45.884430 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-01-03 00:33:45.884441 | orchestrator | Saturday 03 January 2026 00:33:28 +0000 (0:00:00.516) 0:07:23.585 ******
2026-01-03 00:33:45.884453 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:33:45.884467 | orchestrator |
2026-01-03 00:33:45.884478 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-01-03 00:33:45.884489 | orchestrator | Saturday 03 January 2026 00:33:29 +0000 (0:00:00.931) 0:07:24.517 ******
2026-01-03 00:33:45.884500 | orchestrator | ok: [testbed-manager]
2026-01-03 00:33:45.884511 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:33:45.884521 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:33:45.884532 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:33:45.884543 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:33:45.884553 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:33:45.884564 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:33:45.884575 | orchestrator |
2026-01-03 00:33:45.884586 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-01-03 00:33:45.884597 | orchestrator | Saturday 03 January 2026 00:33:31 +0000 (0:00:01.931) 0:07:26.449 ******
2026-01-03 00:33:45.884607 | orchestrator | ok: [testbed-manager]
2026-01-03 00:33:45.884618 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:33:45.884629 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:33:45.884639 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:33:45.884657 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:33:45.884675 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:33:45.884694 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:33:45.884712 | orchestrator |
2026-01-03 00:33:45.884731 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-01-03 00:33:45.884747 | orchestrator | Saturday 03 January 2026 00:33:32 +0000 (0:00:01.111) 0:07:27.561 ******
2026-01-03 00:33:45.884765 | orchestrator | ok: [testbed-manager]
2026-01-03 00:33:45.884781 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:33:45.884800 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:33:45.884819 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:33:45.884839 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:33:45.884878 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:33:45.884891 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:33:45.884902 | orchestrator |
2026-01-03 00:33:45.884913 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-01-03 00:33:45.884923 | orchestrator | Saturday 03 January 2026 00:33:33 +0000 (0:00:00.846) 0:07:28.407 ******
2026-01-03 00:33:45.884935 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-03 00:33:45.884948 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-03 00:33:45.884959 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-03 00:33:45.884974 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-03 00:33:45.884986 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-03 00:33:45.884997 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-03 00:33:45.885007 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-03 00:33:45.885018 | orchestrator |
2026-01-03 00:33:45.885037 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-01-03 00:33:45.885048 | orchestrator | Saturday 03 January 2026 00:33:35 +0000 (0:00:01.882) 0:07:30.290 ******
2026-01-03 00:33:45.885059 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:33:45.885071 | orchestrator |
2026-01-03 00:33:45.885082 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-01-03 00:33:45.885092 | orchestrator | Saturday 03 January 2026 00:33:35 +0000 (0:00:00.812) 0:07:31.102 ******
2026-01-03 00:33:45.885103 | orchestrator | changed: [testbed-manager]
2026-01-03 00:33:45.885114 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:33:45.885124 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:33:45.885135 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:33:45.885146 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:33:45.885157 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:33:45.885170 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:33:45.885215 | orchestrator |
2026-01-03 00:33:45.885246 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-01-03 00:34:17.444205 | orchestrator | Saturday 03 January 2026 00:33:45 +0000 (0:00:09.888) 0:07:40.991 ******
2026-01-03 00:34:17.444297 | orchestrator | ok: [testbed-manager]
2026-01-03 00:34:17.444306 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:34:17.444312 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:34:17.444316 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:34:17.444321 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:34:17.444326 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:34:17.444330 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:34:17.444335 | orchestrator |
2026-01-03 00:34:17.444340 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-01-03 00:34:17.444345 | orchestrator | Saturday 03 January 2026 00:33:47 +0000 (0:00:01.988) 0:07:42.980 ******
2026-01-03 00:34:17.444350 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:34:17.444354 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:34:17.444358 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:34:17.444363 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:34:17.444367 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:34:17.444372 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:34:17.444377 | orchestrator |
2026-01-03 00:34:17.444385 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-01-03 00:34:17.444392 | orchestrator | Saturday 03 January 2026 00:33:49 +0000 (0:00:01.360) 0:07:44.340 ******
2026-01-03 00:34:17.444403 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:34:17.444413 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:34:17.444420 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:34:17.444426 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:34:17.444433 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:34:17.444441 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:34:17.444450 | orchestrator | changed: [testbed-manager]
2026-01-03 00:34:17.444460 | orchestrator |
2026-01-03 00:34:17.444466 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-01-03 00:34:17.444474 | orchestrator |
2026-01-03 00:34:17.444481 | orchestrator | TASK [Include hardening role] **************************************************
2026-01-03 00:34:17.444489 | orchestrator | Saturday 03 January 2026 00:33:51 +0000 (0:00:01.806) 0:07:46.147 ******
2026-01-03 00:34:17.444496 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:34:17.444505 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:34:17.444511 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:34:17.444518 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:34:17.444525 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:34:17.444530 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:34:17.444534 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:34:17.444538 | orchestrator |
2026-01-03 00:34:17.444560 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-01-03 00:34:17.444564 | orchestrator |
2026-01-03 00:34:17.444569 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-01-03 00:34:17.444573 | orchestrator | Saturday 03 January 2026 00:33:51 +0000 (0:00:00.659) 0:07:46.806 ******
2026-01-03 00:34:17.444578 | orchestrator | changed: [testbed-manager]
2026-01-03 00:34:17.444582 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:34:17.444586 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:34:17.444591 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:34:17.444595 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:34:17.444600 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:34:17.444604 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:34:17.444608 | orchestrator |
2026-01-03 00:34:17.444613 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-01-03 00:34:17.444617 | orchestrator | Saturday 03 January 2026 00:33:53 +0000 (0:00:01.347) 0:07:48.153 ******
2026-01-03 00:34:17.444621 | orchestrator | ok: [testbed-manager]
2026-01-03 00:34:17.444626 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:34:17.444630 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:34:17.444634 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:34:17.444638 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:34:17.444642 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:34:17.444647 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:34:17.444651 | orchestrator |
2026-01-03 00:34:17.444655 | orchestrator | TASK [Include auditd role] *****************************************************
2026-01-03 00:34:17.444670 | orchestrator | Saturday 03 January 2026 00:33:54 +0000 (0:00:01.512) 0:07:49.666 ******
2026-01-03 00:34:17.444674 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:34:17.444679 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:34:17.444684 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:34:17.444689 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:34:17.444694 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:34:17.444699 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:34:17.444704 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:34:17.444709 | orchestrator |
2026-01-03 00:34:17.444714 | orchestrator | TASK [Include smartd role] *****************************************************
2026-01-03 00:34:17.444719 | orchestrator | Saturday 03 January 2026 00:33:55 +0000 (0:00:00.484) 0:07:50.150 ******
2026-01-03 00:34:17.444725 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:34:17.444731 | orchestrator |
2026-01-03 00:34:17.444737 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-01-03 00:34:17.444742 | orchestrator | Saturday 03 January 2026 00:33:55 +0000 (0:00:00.935) 0:07:51.086 ******
2026-01-03 00:34:17.444748 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:34:17.444755 | orchestrator |
2026-01-03 00:34:17.444761 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-01-03 00:34:17.444766 | orchestrator | Saturday 03 January 2026 00:33:56 +0000 (0:00:00.759) 0:07:51.846 ******
2026-01-03 00:34:17.444771 | orchestrator | changed: [testbed-manager]
2026-01-03 00:34:17.444776 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:34:17.444781 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:34:17.444786 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:34:17.444791 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:34:17.444796 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:34:17.444801 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:34:17.444806 | orchestrator |
2026-01-03 00:34:17.444823 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-01-03 00:34:17.444829 | orchestrator | Saturday 03 January 2026 00:34:06 +0000 (0:00:09.321) 0:08:01.168 ******
2026-01-03 00:34:17.444838 | orchestrator | changed: [testbed-manager]
2026-01-03 00:34:17.444843 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:34:17.444848 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:34:17.444853 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:34:17.444858 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:34:17.444863 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:34:17.444868 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:34:17.444873 | orchestrator |
2026-01-03 00:34:17.444878 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-01-03 00:34:17.444883 | orchestrator | Saturday 03 January 2026 00:34:07 +0000 (0:00:01.010) 0:08:02.178 ******
2026-01-03 00:34:17.444889 | orchestrator | changed: [testbed-manager]
2026-01-03 00:34:17.444894 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:34:17.444899 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:34:17.444904 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:34:17.444909 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:34:17.444914 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:34:17.444918 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:34:17.444924 | orchestrator |
2026-01-03 00:34:17.444929 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-01-03 00:34:17.444933 | orchestrator | Saturday 03 January 2026 00:34:08 +0000 (0:00:01.341) 0:08:03.520 ******
2026-01-03 00:34:17.444937 | orchestrator | changed: [testbed-manager]
2026-01-03 00:34:17.444942 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:34:17.444946 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:34:17.444950 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:34:17.444954 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:34:17.444959 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:34:17.444963 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:34:17.444967 | orchestrator |
2026-01-03 00:34:17.444971 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-01-03 00:34:17.444975 | orchestrator | Saturday 03 January 2026 00:34:10 +0000 (0:00:01.856) 0:08:05.376 ******
2026-01-03 00:34:17.444980 | orchestrator | changed: [testbed-manager]
2026-01-03 00:34:17.444984 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:34:17.444988 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:34:17.444992 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:34:17.444996 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:34:17.445001 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:34:17.445005 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:34:17.445009 | orchestrator |
2026-01-03 00:34:17.445013 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-01-03 00:34:17.445018 | orchestrator | Saturday 03 January 2026 00:34:11 +0000 (0:00:01.268) 0:08:06.645 ******
2026-01-03 00:34:17.445022 | orchestrator | changed: [testbed-manager]
2026-01-03 00:34:17.445026 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:34:17.445030 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:34:17.445035 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:34:17.445039 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:34:17.445043 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:34:17.445047 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:34:17.445052 | orchestrator |
2026-01-03 00:34:17.445056 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-01-03 00:34:17.445060 | orchestrator |
2026-01-03 00:34:17.445065 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-01-03 00:34:17.445069 | orchestrator | Saturday 03 January 2026 00:34:12 +0000 (0:00:01.133) 0:08:07.778 ******
2026-01-03 00:34:17.445073 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:34:17.445078 | orchestrator |
2026-01-03 00:34:17.445082 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-03 00:34:17.445093 | orchestrator | Saturday 03 January 2026 00:34:13 +0000 (0:00:00.789) 0:08:08.567 ******
2026-01-03 00:34:17.445098 | orchestrator | ok: [testbed-manager]
2026-01-03 00:34:17.445102 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:34:17.445106 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:34:17.445111 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:34:17.445115 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:34:17.445119 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:34:17.445123 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:34:17.445127 | orchestrator |
2026-01-03 00:34:17.445132 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-03 00:34:17.445136 | orchestrator | Saturday 03 January 2026 00:34:14 +0000 (0:00:01.020) 0:08:09.588 ******
2026-01-03 00:34:17.445140 | orchestrator | changed: [testbed-manager]
2026-01-03 00:34:17.445145 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:34:17.445177 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:34:17.445182 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:34:17.445186 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:34:17.445191 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:34:17.445195 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:34:17.445199 | orchestrator |
2026-01-03 00:34:17.445203 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-01-03 00:34:17.445208 | orchestrator | Saturday 03 January 2026 00:34:15 +0000 (0:00:01.173) 0:08:10.762 ******
2026-01-03 00:34:17.445212 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:34:17.445217 | orchestrator |
2026-01-03 00:34:17.445221 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-03 00:34:17.445225 | orchestrator | Saturday 03 January 2026 00:34:16 +0000 (0:00:00.955) 0:08:11.717 ******
2026-01-03 00:34:17.445229 | orchestrator | ok: [testbed-manager]
2026-01-03 00:34:17.445234 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:34:17.445238 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:34:17.445242 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:34:17.445246 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:34:17.445251 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:34:17.445255 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:34:17.445259 | orchestrator |
2026-01-03 00:34:17.445267 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-03 00:34:18.953493 | orchestrator | Saturday 03 January 2026 00:34:17 +0000 (0:00:00.837) 0:08:12.555 ******
2026-01-03 00:34:18.953616 | orchestrator | changed: [testbed-manager]
2026-01-03 00:34:18.953640 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:34:18.953657 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:34:18.953675 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:34:18.953691 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:34:18.953706 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:34:18.953723 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:34:18.953740 | orchestrator |
2026-01-03 00:34:18.953759 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:34:18.953777 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-01-03 00:34:18.953796 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-03 00:34:18.953813 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-03 00:34:18.953830 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-03 00:34:18.953848 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-01-03 00:34:18.953896 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-03 00:34:18.953913 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-03 00:34:18.953930 | orchestrator |
2026-01-03 00:34:18.953949 | orchestrator |
2026-01-03 00:34:18.953966 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:34:18.953982 | orchestrator | Saturday 03 January 2026 00:34:18 +0000 (0:00:01.100) 0:08:13.656 ******
2026-01-03 00:34:18.953993 | orchestrator | ===============================================================================
2026-01-03 00:34:18.954003 | orchestrator | osism.commons.packages : Install required packages --------------------- 76.02s
2026-01-03 00:34:18.954013 | orchestrator | osism.commons.packages : Download required packages -------------------- 39.85s
2026-01-03 00:34:18.954113 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.09s
2026-01-03 00:34:18.954132 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.92s
2026-01-03 00:34:18.954173 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.24s
2026-01-03 00:34:18.954192 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.12s
2026-01-03 00:34:18.954209 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 10.75s
2026-01-03 00:34:18.954225 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.50s
2026-01-03 00:34:18.954250 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.89s
2026-01-03 00:34:18.954286 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.32s
2026-01-03 00:34:18.954303 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.31s
2026-01-03 00:34:18.954320 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.94s
2026-01-03 00:34:18.954336 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.72s
2026-01-03 00:34:18.954347 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.61s
2026-01-03 00:34:18.954357 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.41s
2026-01-03 00:34:18.954367 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.72s
2026-01-03 00:34:18.954377 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.37s
2026-01-03 00:34:18.954386 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 7.18s
2026-01-03 00:34:18.954396 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 6.90s
2026-01-03 00:34:18.954405 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.39s
2026-01-03 00:34:19.228071 | orchestrator | + osism apply fail2ban
2026-01-03 00:34:31.757867 | orchestrator | 2026-01-03 00:34:31 | INFO  | Task c0ab79be-b89c-46dd-819f-28b8561ee669 (fail2ban) was prepared for execution.
2026-01-03 00:34:31.757989 | orchestrator | 2026-01-03 00:34:31 | INFO  | It takes a moment until task c0ab79be-b89c-46dd-819f-28b8561ee669 (fail2ban) has been started and output is visible here.
2026-01-03 00:34:53.126321 | orchestrator |
2026-01-03 00:34:53.126438 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-01-03 00:34:53.126457 | orchestrator |
2026-01-03 00:34:53.126470 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-01-03 00:34:53.126483 | orchestrator | Saturday 03 January 2026 00:34:36 +0000 (0:00:00.275) 0:00:00.275 ******
2026-01-03 00:34:53.126495 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:34:53.126535 | orchestrator |
2026-01-03 00:34:53.126548 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-01-03 00:34:53.126559 | orchestrator | Saturday 03 January 2026 00:34:37 +0000 (0:00:01.082) 0:00:01.358 ******
2026-01-03 00:34:53.126570 | orchestrator | changed: [testbed-manager]
2026-01-03 00:34:53.126582 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:34:53.126593 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:34:53.126603 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:34:53.126614 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:34:53.126625 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:34:53.126635 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:34:53.126646 | orchestrator |
2026-01-03 00:34:53.126657 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-01-03 00:34:53.126668 | orchestrator | Saturday 03 January 2026 00:34:48 +0000 (0:00:11.019) 0:00:12.377 ******
2026-01-03 00:34:53.126679 | orchestrator | changed: [testbed-manager]
2026-01-03 00:34:53.126690 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:34:53.126700 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:34:53.126711 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:34:53.126722 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:34:53.126733 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:34:53.126743 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:34:53.126754 | orchestrator |
2026-01-03 00:34:53.126765 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-01-03 00:34:53.126776 | orchestrator | Saturday 03 January 2026 00:34:49 +0000 (0:00:01.462) 0:00:13.839 ******
2026-01-03 00:34:53.126786 | orchestrator | ok: [testbed-manager]
2026-01-03 00:34:53.126798 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:34:53.126809 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:34:53.126820 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:34:53.126831 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:34:53.126841 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:34:53.126852 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:34:53.126865 | orchestrator |
2026-01-03 00:34:53.126878 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-01-03 00:34:53.126890 | orchestrator | Saturday 03 January 2026 00:34:51 +0000 (0:00:01.464) 0:00:15.304 ******
2026-01-03 00:34:53.126902 | orchestrator | changed: [testbed-manager]
2026-01-03 00:34:53.126914 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:34:53.126928 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:34:53.126940 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:34:53.126953 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:34:53.126965 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:34:53.126977 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:34:53.126990 | orchestrator |
2026-01-03 00:34:53.127004 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:34:53.127017 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:34:53.127031 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:34:53.127043 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:34:53.127056 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:34:53.127084 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:34:53.127104 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:34:53.127172 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:34:53.127209 | orchestrator |
2026-01-03 00:34:53.127228 | orchestrator |
2026-01-03 00:34:53.127247 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:34:53.127265 | orchestrator | Saturday 03 January 2026 00:34:52 +0000 (0:00:01.618) 0:00:16.923 ******
2026-01-03 00:34:53.127283 | orchestrator | ===============================================================================
2026-01-03 00:34:53.127303 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.02s
2026-01-03 00:34:53.127321 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.62s
2026-01-03 00:34:53.127338 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.46s
2026-01-03 00:34:53.127349 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.46s
2026-01-03 00:34:53.127360 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.08s
2026-01-03 00:34:53.407029 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-01-03 00:34:53.407192 | orchestrator | + osism apply network
2026-01-03 00:35:05.499688 | orchestrator | 2026-01-03 00:35:05 | INFO  | Task 9cffaccb-a8e5-43d4-b942-e5519835cfc8 (network) was prepared for execution.
2026-01-03 00:35:05.499803 | orchestrator | 2026-01-03 00:35:05 | INFO  | It takes a moment until task 9cffaccb-a8e5-43d4-b942-e5519835cfc8 (network) has been started and output is visible here.
2026-01-03 00:35:31.411980 | orchestrator |
2026-01-03 00:35:31.412165 | orchestrator | PLAY [Apply role network] ******************************************************
2026-01-03 00:35:31.412193 | orchestrator |
2026-01-03 00:35:31.412213 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-01-03 00:35:31.412232 | orchestrator | Saturday 03 January 2026 00:35:09 +0000 (0:00:00.187) 0:00:00.187 ******
2026-01-03 00:35:31.412251 | orchestrator | ok: [testbed-manager]
2026-01-03 00:35:31.412272 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:35:31.412291 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:35:31.412309 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:35:31.412327 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:35:31.412346 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:35:31.412364 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:35:31.412383 | orchestrator |
2026-01-03 00:35:31.412401 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-01-03 00:35:31.412419 | orchestrator | Saturday 03 January 2026 00:35:09 +0000 (0:00:00.502) 0:00:00.689 ******
2026-01-03 00:35:31.412439 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:35:31.412460 | orchestrator |
2026-01-03 00:35:31.412479 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-01-03 00:35:31.412499 | orchestrator | Saturday 03 January 2026 00:35:10 +0000 (0:00:00.873) 0:00:01.563 ******
2026-01-03 00:35:31.412518 | orchestrator | ok: [testbed-manager]
2026-01-03 00:35:31.412536 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:35:31.412555 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:35:31.412574 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:35:31.412593 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:35:31.412611 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:35:31.412630 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:35:31.412648 | orchestrator |
2026-01-03 00:35:31.412670 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-01-03 00:35:31.412690 | orchestrator | Saturday 03 January 2026 00:35:12 +0000 (0:00:02.037) 0:00:03.600 ******
2026-01-03 00:35:31.412709 | orchestrator | ok: [testbed-manager]
2026-01-03 00:35:31.412729 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:35:31.412750 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:35:31.412769 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:35:31.412815 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:35:31.412832 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:35:31.412848 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:35:31.412859 | orchestrator |
2026-01-03 00:35:31.412870 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-01-03 00:35:31.412881 | orchestrator | Saturday 03 January 2026 00:35:14 +0000 (0:00:01.591) 0:00:05.191 ******
2026-01-03 00:35:31.412893 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-01-03 00:35:31.412906 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-01-03 00:35:31.412924 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-01-03 00:35:31.412941 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-01-03 00:35:31.412959 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-01-03 00:35:31.412977 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-01-03 00:35:31.412994 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-01-03 00:35:31.413012 | orchestrator |
2026-01-03 00:35:31.413029 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-01-03 00:35:31.413047 | orchestrator | Saturday 03 January 2026 00:35:15 +0000 (0:00:00.882) 0:00:06.074 ******
2026-01-03 00:35:31.413065 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-03 00:35:31.413121 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-03 00:35:31.413136 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-03 00:35:31.413147 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-03 00:35:31.413158 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-03 00:35:31.413169 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-03 00:35:31.413180 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-03 00:35:31.413190 | orchestrator |
2026-01-03 00:35:31.413202 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-01-03 00:35:31.413214 | orchestrator | Saturday 03 January 2026 00:35:17 +0000 (0:00:02.646) 0:00:08.720 ******
2026-01-03 00:35:31.413225 | orchestrator | changed: [testbed-manager]
2026-01-03 00:35:31.413236 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:35:31.413246 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:35:31.413257 | orchestrator | changed:
[testbed-node-2] 2026-01-03 00:35:31.413268 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:35:31.413279 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:35:31.413290 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:35:31.413300 | orchestrator | 2026-01-03 00:35:31.413311 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-01-03 00:35:31.413323 | orchestrator | Saturday 03 January 2026 00:35:19 +0000 (0:00:01.413) 0:00:10.134 ****** 2026-01-03 00:35:31.413333 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-03 00:35:31.413344 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-03 00:35:31.413355 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-03 00:35:31.413366 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-03 00:35:31.413376 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-03 00:35:31.413387 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-03 00:35:31.413398 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-03 00:35:31.413409 | orchestrator | 2026-01-03 00:35:31.413420 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-01-03 00:35:31.413431 | orchestrator | Saturday 03 January 2026 00:35:20 +0000 (0:00:01.550) 0:00:11.684 ****** 2026-01-03 00:35:31.413441 | orchestrator | ok: [testbed-manager] 2026-01-03 00:35:31.413452 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:35:31.413463 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:35:31.413474 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:35:31.413485 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:35:31.413495 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:35:31.413506 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:35:31.413517 | orchestrator | 2026-01-03 00:35:31.413528 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-01-03 00:35:31.413572 | 
orchestrator | Saturday 03 January 2026 00:35:21 +0000 (0:00:01.011) 0:00:12.696 ****** 2026-01-03 00:35:31.413584 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:35:31.413595 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:35:31.413606 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:35:31.413616 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:35:31.413627 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:35:31.413638 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:35:31.413648 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:35:31.413659 | orchestrator | 2026-01-03 00:35:31.413670 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-01-03 00:35:31.413681 | orchestrator | Saturday 03 January 2026 00:35:22 +0000 (0:00:00.548) 0:00:13.244 ****** 2026-01-03 00:35:31.413691 | orchestrator | ok: [testbed-manager] 2026-01-03 00:35:31.413702 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:35:31.413713 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:35:31.413723 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:35:31.413734 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:35:31.413744 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:35:31.413755 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:35:31.413765 | orchestrator | 2026-01-03 00:35:31.413776 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-01-03 00:35:31.413787 | orchestrator | Saturday 03 January 2026 00:35:24 +0000 (0:00:02.306) 0:00:15.551 ****** 2026-01-03 00:35:31.413798 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:35:31.413809 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:35:31.413819 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:35:31.413830 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:35:31.413840 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:35:31.413851 | 
orchestrator | skipping: [testbed-node-5] 2026-01-03 00:35:31.413862 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-01-03 00:35:31.413874 | orchestrator | 2026-01-03 00:35:31.413885 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-01-03 00:35:31.413896 | orchestrator | Saturday 03 January 2026 00:35:25 +0000 (0:00:00.958) 0:00:16.510 ****** 2026-01-03 00:35:31.413926 | orchestrator | ok: [testbed-manager] 2026-01-03 00:35:31.413937 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:35:31.413948 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:35:31.413958 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:35:31.413969 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:35:31.413980 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:35:31.413990 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:35:31.414001 | orchestrator | 2026-01-03 00:35:31.414012 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-01-03 00:35:31.414151 | orchestrator | Saturday 03 January 2026 00:35:27 +0000 (0:00:01.626) 0:00:18.136 ****** 2026-01-03 00:35:31.414163 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:35:31.414176 | orchestrator | 2026-01-03 00:35:31.414187 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-01-03 00:35:31.414198 | orchestrator | Saturday 03 January 2026 00:35:28 +0000 (0:00:01.216) 0:00:19.353 ****** 2026-01-03 00:35:31.414209 | orchestrator | ok: [testbed-manager] 2026-01-03 00:35:31.414220 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:35:31.414231 | orchestrator 
| ok: [testbed-node-1] 2026-01-03 00:35:31.414241 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:35:31.414252 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:35:31.414263 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:35:31.414274 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:35:31.414284 | orchestrator | 2026-01-03 00:35:31.414295 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-01-03 00:35:31.414317 | orchestrator | Saturday 03 January 2026 00:35:29 +0000 (0:00:01.107) 0:00:20.461 ****** 2026-01-03 00:35:31.414329 | orchestrator | ok: [testbed-manager] 2026-01-03 00:35:31.414339 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:35:31.414350 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:35:31.414368 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:35:31.414379 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:35:31.414390 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:35:31.414400 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:35:31.414411 | orchestrator | 2026-01-03 00:35:31.414422 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-01-03 00:35:31.414433 | orchestrator | Saturday 03 January 2026 00:35:30 +0000 (0:00:00.641) 0:00:21.103 ****** 2026-01-03 00:35:31.414444 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-01-03 00:35:31.414455 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-01-03 00:35:31.414466 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-01-03 00:35:31.414476 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-01-03 00:35:31.414487 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-03 00:35:31.414498 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-01-03 00:35:31.414509 | 
orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-01-03 00:35:31.414519 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-03 00:35:31.414530 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-03 00:35:31.414541 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-03 00:35:31.414552 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-03 00:35:31.414562 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-01-03 00:35:31.414573 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-03 00:35:31.414584 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-03 00:35:31.414595 | orchestrator | 2026-01-03 00:35:31.414614 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-01-03 00:35:45.886481 | orchestrator | Saturday 03 January 2026 00:35:31 +0000 (0:00:01.171) 0:00:22.274 ****** 2026-01-03 00:35:45.886579 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:35:45.886591 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:35:45.886598 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:35:45.886603 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:35:45.886610 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:35:45.886616 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:35:45.886622 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:35:45.886627 | orchestrator | 2026-01-03 00:35:45.886635 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-01-03 00:35:45.886641 | orchestrator | Saturday 03 January 2026 00:35:31 +0000 (0:00:00.599) 0:00:22.873 ****** 2026-01-03 00:35:45.886648 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-node-0, testbed-node-3, testbed-manager, testbed-node-2, testbed-node-4, testbed-node-5 2026-01-03 00:35:45.886655 | orchestrator | 2026-01-03 00:35:45.886661 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-01-03 00:35:45.886667 | orchestrator | Saturday 03 January 2026 00:35:36 +0000 (0:00:04.204) 0:00:27.078 ****** 2026-01-03 00:35:45.886676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-01-03 00:35:45.886683 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-03 00:35:45.886712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-03 00:35:45.886718 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-03 00:35:45.886724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-03 
00:35:45.886730 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-03 00:35:45.886749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-01-03 00:35:45.886755 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-03 00:35:45.886761 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-01-03 00:35:45.886771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-03 00:35:45.886777 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-03 00:35:45.886799 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-03 00:35:45.886805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-03 00:35:45.886811 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-03 00:35:45.886816 | orchestrator | 2026-01-03 00:35:45.886822 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-01-03 00:35:45.886827 | orchestrator | Saturday 03 January 2026 00:35:40 +0000 (0:00:04.685) 0:00:31.764 ****** 2026-01-03 00:35:45.886838 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-03 00:35:45.886844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-03 00:35:45.886850 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-03 00:35:45.886855 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-03 00:35:45.886861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-01-03 00:35:45.886866 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-01-03 00:35:45.886872 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-03 00:35:45.886881 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-03 00:35:45.886886 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-03 00:35:45.886892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 
'mtu': 1350, 'vni': 23}}) 2026-01-03 00:35:45.886897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-03 00:35:45.886903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-01-03 00:35:45.886916 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-03 00:35:58.613979 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-03 00:35:58.614188 | orchestrator | 2026-01-03 00:35:58.614204 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-01-03 00:35:58.614213 | orchestrator | Saturday 03 January 2026 00:35:45 +0000 (0:00:04.982) 0:00:36.746 ****** 2026-01-03 00:35:58.614221 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:35:58.614228 | orchestrator | 2026-01-03 00:35:58.614235 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
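The per-host items in the two VXLAN tasks above follow an obvious full-mesh pattern: each node's `dests` list is simply every other node's underlay IP (lexicographically sorted, which is why `192.168.16.5` sorts after `.15`). A minimal sketch of that computation, with an illustrative dict literal and helper name that are not part of the osism.commons.network role:

```python
# Underlay IPs as they appear in the task output above (illustrative dict,
# not taken from the role's inventory format).
node_ips = {
    "testbed-manager": "192.168.16.5",
    "testbed-node-0": "192.168.16.10",
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
    "testbed-node-3": "192.168.16.13",
    "testbed-node-4": "192.168.16.14",
    "testbed-node-5": "192.168.16.15",
}

def vxlan_dests(local_ip, all_ips):
    """Remote VTEPs for one node: all underlay IPs except its own,
    sorted as strings (matching the ordering seen in the log)."""
    return sorted(ip for ip in all_ips if ip != local_ip)

# One dests list per node -- a full mesh over the 192.168.16.0/24 underlay.
mesh = {name: vxlan_dests(ip, node_ips.values())
        for name, ip in node_ips.items()}
```

Note the string sort, not a numeric one: `'192.168.16.5'` compares greater than `'192.168.16.15'` character by character, which reproduces the exact ordering printed in the task items.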
2026-01-03 00:35:58.614242 | orchestrator | Saturday 03 January 2026 00:35:46 +0000 (0:00:01.086) 0:00:37.832 ****** 2026-01-03 00:35:58.614249 | orchestrator | ok: [testbed-manager] 2026-01-03 00:35:58.614257 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:35:58.614263 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:35:58.614270 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:35:58.614277 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:35:58.614283 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:35:58.614290 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:35:58.614296 | orchestrator | 2026-01-03 00:35:58.614303 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-01-03 00:35:58.614310 | orchestrator | Saturday 03 January 2026 00:35:47 +0000 (0:00:01.004) 0:00:38.837 ****** 2026-01-03 00:35:58.614316 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-03 00:35:58.614324 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-03 00:35:58.614330 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-03 00:35:58.614337 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-03 00:35:58.614344 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:35:58.614351 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-03 00:35:58.614358 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-03 00:35:58.614364 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-03 00:35:58.614371 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-03 00:35:58.614377 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:35:58.614384 | 
orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-03 00:35:58.614390 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-03 00:35:58.614397 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-03 00:35:58.614403 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-03 00:35:58.614410 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:35:58.614417 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-03 00:35:58.614434 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-03 00:35:58.614441 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-03 00:35:58.614448 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-03 00:35:58.614455 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:35:58.614461 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-03 00:35:58.614468 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-03 00:35:58.614474 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-03 00:35:58.614481 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-03 00:35:58.614493 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-03 00:35:58.614500 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-03 00:35:58.614507 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-03 00:35:58.614513 | orchestrator | skipping: [testbed-node-4] => 
(item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-03 00:35:58.614520 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:35:58.614527 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:35:58.614534 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-03 00:35:58.614542 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-03 00:35:58.614550 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-03 00:35:58.614558 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-03 00:35:58.614565 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:35:58.614574 | orchestrator | 2026-01-03 00:35:58.614582 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-01-03 00:35:58.614605 | orchestrator | Saturday 03 January 2026 00:35:48 +0000 (0:00:00.793) 0:00:39.630 ****** 2026-01-03 00:35:58.614613 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:35:58.614621 | orchestrator | 2026-01-03 00:35:58.614629 | orchestrator | TASK [osism.commons.network : Install required packages for network-extra-init] *** 2026-01-03 00:35:58.614638 | orchestrator | Saturday 03 January 2026 00:35:49 +0000 (0:00:01.061) 0:00:40.692 ****** 2026-01-03 00:35:58.614650 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:35:58.614662 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:35:58.614674 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:35:58.614686 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:35:58.614696 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:35:58.614707 | orchestrator | 
skipping: [testbed-node-4] 2026-01-03 00:35:58.614719 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:35:58.614731 | orchestrator | 2026-01-03 00:35:58.614742 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-01-03 00:35:58.614754 | orchestrator | Saturday 03 January 2026 00:35:50 +0000 (0:00:00.569) 0:00:41.262 ****** 2026-01-03 00:35:58.614761 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:35:58.614767 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:35:58.614774 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:35:58.614780 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:35:58.614787 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:35:58.614793 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:35:58.614800 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:35:58.614807 | orchestrator | 2026-01-03 00:35:58.614813 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 2026-01-03 00:35:58.614820 | orchestrator | Saturday 03 January 2026 00:35:51 +0000 (0:00:00.657) 0:00:41.919 ****** 2026-01-03 00:35:58.614826 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:35:58.614833 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:35:58.614839 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:35:58.614846 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:35:58.614852 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:35:58.614859 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:35:58.614865 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:35:58.614872 | orchestrator | 2026-01-03 00:35:58.614878 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-01-03 00:35:58.614885 | orchestrator | Saturday 03 January 2026 00:35:51 +0000 (0:00:00.552) 0:00:42.472 ****** 2026-01-03 00:35:58.614899 | orchestrator | 
skipping: [testbed-manager] 2026-01-03 00:35:58.614906 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:35:58.614912 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:35:58.614919 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:35:58.614925 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:35:58.614932 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:35:58.614938 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:35:58.614945 | orchestrator | 2026-01-03 00:35:58.614951 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-01-03 00:35:58.614958 | orchestrator | Saturday 03 January 2026 00:35:52 +0000 (0:00:00.664) 0:00:43.137 ****** 2026-01-03 00:35:58.614965 | orchestrator | ok: [testbed-manager] 2026-01-03 00:35:58.614971 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:35:58.614978 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:35:58.614984 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:35:58.614991 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:35:58.614997 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:35:58.615004 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:35:58.615010 | orchestrator | 2026-01-03 00:35:58.615017 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-01-03 00:35:58.615028 | orchestrator | Saturday 03 January 2026 00:35:53 +0000 (0:00:01.558) 0:00:44.695 ****** 2026-01-03 00:35:58.615035 | orchestrator | ok: [testbed-manager] 2026-01-03 00:35:58.615041 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:35:58.615069 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:35:58.615080 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:35:58.615089 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:35:58.615099 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:35:58.615110 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:35:58.615120 | orchestrator | 2026-01-03 
00:35:58.615131 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] **************** 2026-01-03 00:35:58.615143 | orchestrator | Saturday 03 January 2026 00:35:55 +0000 (0:00:01.246) 0:00:45.942 ****** 2026-01-03 00:35:58.615154 | orchestrator | ok: [testbed-manager] 2026-01-03 00:35:58.615165 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:35:58.615176 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:35:58.615186 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:35:58.615198 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:35:58.615209 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:35:58.615221 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:35:58.615231 | orchestrator | 2026-01-03 00:35:58.615243 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2026-01-03 00:35:58.615250 | orchestrator | Saturday 03 January 2026 00:35:57 +0000 (0:00:02.239) 0:00:48.181 ****** 2026-01-03 00:35:58.615257 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:35:58.615264 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:35:58.615270 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:35:58.615277 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:35:58.615283 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:35:58.615290 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:35:58.615296 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:35:58.615303 | orchestrator | 2026-01-03 00:35:58.615309 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-01-03 00:35:58.615316 | orchestrator | Saturday 03 January 2026 00:35:57 +0000 (0:00:00.612) 0:00:48.794 ****** 2026-01-03 00:35:58.615322 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:35:58.615329 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:35:58.615335 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:35:58.615342 | 
orchestrator | skipping: [testbed-node-2] 2026-01-03 00:35:58.615348 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:35:58.615355 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:35:58.615361 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:35:58.615368 | orchestrator | 2026-01-03 00:35:58.615374 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:35:58.949473 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-03 00:35:58.949574 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-01-03 00:35:58.949589 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-01-03 00:35:58.949600 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-01-03 00:35:58.949612 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-01-03 00:35:58.949630 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-01-03 00:35:58.949650 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-01-03 00:35:58.949671 | orchestrator | 2026-01-03 00:35:58.949692 | orchestrator | 2026-01-03 00:35:58.949709 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:35:58.949721 | orchestrator | Saturday 03 January 2026 00:35:58 +0000 (0:00:00.688) 0:00:49.482 ****** 2026-01-03 00:35:58.949732 | orchestrator | =============================================================================== 2026-01-03 00:35:58.949742 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.98s 2026-01-03 00:35:58.949761 | orchestrator | 
osism.commons.network : Create systemd networkd netdev files ------------ 4.69s 2026-01-03 00:35:58.949780 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.20s 2026-01-03 00:35:58.949797 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 2.65s 2026-01-03 00:35:58.949815 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.31s 2026-01-03 00:35:58.949833 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.24s 2026-01-03 00:35:58.949852 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.04s 2026-01-03 00:35:58.949872 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.63s 2026-01-03 00:35:58.949886 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.59s 2026-01-03 00:35:58.949897 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.56s 2026-01-03 00:35:58.949908 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.55s 2026-01-03 00:35:58.949918 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.41s 2026-01-03 00:35:58.949929 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.25s 2026-01-03 00:35:58.949940 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.22s 2026-01-03 00:35:58.949969 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.17s 2026-01-03 00:35:58.949981 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.11s 2026-01-03 00:35:58.949992 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.09s 2026-01-03 00:35:58.950003 | orchestrator | 
osism.commons.network : Include network extra init ---------------------- 1.06s 2026-01-03 00:35:58.950137 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.01s 2026-01-03 00:35:58.950157 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.00s 2026-01-03 00:35:59.246877 | orchestrator | + osism apply wireguard 2026-01-03 00:36:11.277668 | orchestrator | 2026-01-03 00:36:11 | INFO  | Task 7c8eac63-1147-4964-a509-ee2922e9c5e5 (wireguard) was prepared for execution. 2026-01-03 00:36:11.277813 | orchestrator | 2026-01-03 00:36:11 | INFO  | It takes a moment until task 7c8eac63-1147-4964-a509-ee2922e9c5e5 (wireguard) has been started and output is visible here. 2026-01-03 00:36:29.437285 | orchestrator | 2026-01-03 00:36:29.437403 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-01-03 00:36:29.437420 | orchestrator | 2026-01-03 00:36:29.437433 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-01-03 00:36:29.437444 | orchestrator | Saturday 03 January 2026 00:36:15 +0000 (0:00:00.160) 0:00:00.161 ****** 2026-01-03 00:36:29.437456 | orchestrator | ok: [testbed-manager] 2026-01-03 00:36:29.437467 | orchestrator | 2026-01-03 00:36:29.437478 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-01-03 00:36:29.437489 | orchestrator | Saturday 03 January 2026 00:36:16 +0000 (0:00:01.146) 0:00:01.307 ****** 2026-01-03 00:36:29.437500 | orchestrator | changed: [testbed-manager] 2026-01-03 00:36:29.437512 | orchestrator | 2026-01-03 00:36:29.437523 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-01-03 00:36:29.437534 | orchestrator | Saturday 03 January 2026 00:36:22 +0000 (0:00:05.750) 0:00:07.057 ****** 2026-01-03 00:36:29.437550 | orchestrator | changed: [testbed-manager] 2026-01-03 
00:36:29.437566 | orchestrator | 2026-01-03 00:36:29.437578 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-01-03 00:36:29.437589 | orchestrator | Saturday 03 January 2026 00:36:22 +0000 (0:00:00.522) 0:00:07.580 ****** 2026-01-03 00:36:29.437600 | orchestrator | changed: [testbed-manager] 2026-01-03 00:36:29.437611 | orchestrator | 2026-01-03 00:36:29.437621 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-01-03 00:36:29.437633 | orchestrator | Saturday 03 January 2026 00:36:22 +0000 (0:00:00.417) 0:00:07.997 ****** 2026-01-03 00:36:29.437643 | orchestrator | ok: [testbed-manager] 2026-01-03 00:36:29.437655 | orchestrator | 2026-01-03 00:36:29.437666 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-01-03 00:36:29.437677 | orchestrator | Saturday 03 January 2026 00:36:23 +0000 (0:00:00.636) 0:00:08.634 ****** 2026-01-03 00:36:29.437688 | orchestrator | ok: [testbed-manager] 2026-01-03 00:36:29.437699 | orchestrator | 2026-01-03 00:36:29.437710 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-01-03 00:36:29.437722 | orchestrator | Saturday 03 January 2026 00:36:24 +0000 (0:00:00.414) 0:00:09.048 ****** 2026-01-03 00:36:29.437733 | orchestrator | ok: [testbed-manager] 2026-01-03 00:36:29.437743 | orchestrator | 2026-01-03 00:36:29.437754 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-01-03 00:36:29.437765 | orchestrator | Saturday 03 January 2026 00:36:24 +0000 (0:00:00.407) 0:00:09.456 ****** 2026-01-03 00:36:29.437776 | orchestrator | changed: [testbed-manager] 2026-01-03 00:36:29.437787 | orchestrator | 2026-01-03 00:36:29.437800 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-01-03 00:36:29.437813 | orchestrator | Saturday 03 January 2026 
00:36:25 +0000 (0:00:01.185) 0:00:10.641 ****** 2026-01-03 00:36:29.437826 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-03 00:36:29.437839 | orchestrator | changed: [testbed-manager] 2026-01-03 00:36:29.437852 | orchestrator | 2026-01-03 00:36:29.437865 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-01-03 00:36:29.437883 | orchestrator | Saturday 03 January 2026 00:36:26 +0000 (0:00:00.906) 0:00:11.548 ****** 2026-01-03 00:36:29.437902 | orchestrator | changed: [testbed-manager] 2026-01-03 00:36:29.437921 | orchestrator | 2026-01-03 00:36:29.437938 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-01-03 00:36:29.437957 | orchestrator | Saturday 03 January 2026 00:36:28 +0000 (0:00:01.644) 0:00:13.192 ****** 2026-01-03 00:36:29.437975 | orchestrator | changed: [testbed-manager] 2026-01-03 00:36:29.437996 | orchestrator | 2026-01-03 00:36:29.438112 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:36:29.438131 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:36:29.438172 | orchestrator | 2026-01-03 00:36:29.438184 | orchestrator | 2026-01-03 00:36:29.438195 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:36:29.438206 | orchestrator | Saturday 03 January 2026 00:36:29 +0000 (0:00:00.912) 0:00:14.104 ****** 2026-01-03 00:36:29.438216 | orchestrator | =============================================================================== 2026-01-03 00:36:29.438227 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.75s 2026-01-03 00:36:29.438238 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.64s 2026-01-03 00:36:29.438249 | orchestrator | osism.services.wireguard : Copy 
wg0.conf configuration file ------------- 1.19s 2026-01-03 00:36:29.438260 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.15s 2026-01-03 00:36:29.438275 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.91s 2026-01-03 00:36:29.438300 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.91s 2026-01-03 00:36:29.438325 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.64s 2026-01-03 00:36:29.438344 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.52s 2026-01-03 00:36:29.438362 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.42s 2026-01-03 00:36:29.438381 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.41s 2026-01-03 00:36:29.438399 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s 2026-01-03 00:36:29.715380 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-01-03 00:36:29.755769 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-01-03 00:36:29.755880 | orchestrator | Dload Upload Total Spent Left Speed 2026-01-03 00:36:29.834566 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 189 0 --:--:-- --:--:-- --:--:-- 192 2026-01-03 00:36:29.849141 | orchestrator | + osism apply --environment custom workarounds 2026-01-03 00:36:31.735486 | orchestrator | 2026-01-03 00:36:31 | INFO  | Trying to run play workarounds in environment custom 2026-01-03 00:36:41.892119 | orchestrator | 2026-01-03 00:36:41 | INFO  | Task 32fd56ed-5af5-4f9f-abfe-f01683a04271 (workarounds) was prepared for execution. 
2026-01-03 00:36:41.892231 | orchestrator | 2026-01-03 00:36:41 | INFO  | It takes a moment until task 32fd56ed-5af5-4f9f-abfe-f01683a04271 (workarounds) has been started and output is visible here. 2026-01-03 00:37:05.967519 | orchestrator | 2026-01-03 00:37:05.967632 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-03 00:37:05.967648 | orchestrator | 2026-01-03 00:37:05.967661 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-01-03 00:37:05.967673 | orchestrator | Saturday 03 January 2026 00:36:45 +0000 (0:00:00.091) 0:00:00.091 ****** 2026-01-03 00:37:05.967685 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-01-03 00:37:05.967698 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-01-03 00:37:05.967709 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-01-03 00:37:05.967720 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-01-03 00:37:05.967731 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-01-03 00:37:05.967743 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-01-03 00:37:05.967754 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-01-03 00:37:05.967765 | orchestrator | 2026-01-03 00:37:05.967776 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-01-03 00:37:05.967810 | orchestrator | 2026-01-03 00:37:05.967822 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-01-03 00:37:05.967833 | orchestrator | Saturday 03 January 2026 00:36:46 +0000 (0:00:00.588) 0:00:00.680 ****** 2026-01-03 00:37:05.967845 | orchestrator | ok: [testbed-manager] 2026-01-03 00:37:05.967857 | orchestrator | 2026-01-03 00:37:05.967868 | 
orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-01-03 00:37:05.967879 | orchestrator | 2026-01-03 00:37:05.967890 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-01-03 00:37:05.967901 | orchestrator | Saturday 03 January 2026 00:36:48 +0000 (0:00:02.057) 0:00:02.737 ****** 2026-01-03 00:37:05.967912 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:37:05.967924 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:37:05.967935 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:37:05.967946 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:37:05.967957 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:37:05.967968 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:37:05.967979 | orchestrator | 2026-01-03 00:37:05.968023 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-01-03 00:37:05.968035 | orchestrator | 2026-01-03 00:37:05.968046 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-01-03 00:37:05.968059 | orchestrator | Saturday 03 January 2026 00:36:50 +0000 (0:00:01.828) 0:00:04.566 ****** 2026-01-03 00:37:05.968072 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-03 00:37:05.968087 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-03 00:37:05.968099 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-03 00:37:05.968112 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-03 00:37:05.968125 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-03 00:37:05.968138 | orchestrator 
| changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-03 00:37:05.968150 | orchestrator | 2026-01-03 00:37:05.968163 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2026-01-03 00:37:05.968175 | orchestrator | Saturday 03 January 2026 00:36:51 +0000 (0:00:01.406) 0:00:05.972 ****** 2026-01-03 00:37:05.968188 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:37:05.968201 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:37:05.968213 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:37:05.968226 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:37:05.968238 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:37:05.968251 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:37:05.968263 | orchestrator | 2026-01-03 00:37:05.968289 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-01-03 00:37:05.968303 | orchestrator | Saturday 03 January 2026 00:36:55 +0000 (0:00:03.878) 0:00:09.851 ****** 2026-01-03 00:37:05.968316 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:37:05.968327 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:37:05.968338 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:37:05.968348 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:37:05.968359 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:37:05.968370 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:37:05.968381 | orchestrator | 2026-01-03 00:37:05.968392 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-01-03 00:37:05.968402 | orchestrator | 2026-01-03 00:37:05.968413 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-01-03 00:37:05.968424 | orchestrator | Saturday 03 January 2026 00:36:56 +0000 (0:00:00.653) 0:00:10.505 ****** 2026-01-03 
00:37:05.968435 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:37:05.968445 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:37:05.968464 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:37:05.968475 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:37:05.968486 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:37:05.968496 | orchestrator | changed: [testbed-manager] 2026-01-03 00:37:05.968507 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:37:05.968518 | orchestrator | 2026-01-03 00:37:05.968529 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-01-03 00:37:05.968540 | orchestrator | Saturday 03 January 2026 00:36:57 +0000 (0:00:01.783) 0:00:12.288 ****** 2026-01-03 00:37:05.968550 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:37:05.968561 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:37:05.968572 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:37:05.968583 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:37:05.968593 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:37:05.968604 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:37:05.968631 | orchestrator | changed: [testbed-manager] 2026-01-03 00:37:05.968642 | orchestrator | 2026-01-03 00:37:05.968654 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-01-03 00:37:05.968665 | orchestrator | Saturday 03 January 2026 00:36:59 +0000 (0:00:01.451) 0:00:13.739 ****** 2026-01-03 00:37:05.968675 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:37:05.968686 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:37:05.968697 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:37:05.968708 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:37:05.968719 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:37:05.968730 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:37:05.968740 | orchestrator | ok: [testbed-manager] 
2026-01-03 00:37:05.968757 | orchestrator | 2026-01-03 00:37:05.968776 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-01-03 00:37:05.968795 | orchestrator | Saturday 03 January 2026 00:37:00 +0000 (0:00:01.491) 0:00:15.231 ****** 2026-01-03 00:37:05.968813 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:37:05.968830 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:37:05.968847 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:37:05.968866 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:37:05.968883 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:37:05.968900 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:37:05.968917 | orchestrator | changed: [testbed-manager] 2026-01-03 00:37:05.968935 | orchestrator | 2026-01-03 00:37:05.968952 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-01-03 00:37:05.968969 | orchestrator | Saturday 03 January 2026 00:37:02 +0000 (0:00:01.727) 0:00:16.958 ****** 2026-01-03 00:37:05.969009 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:37:05.969028 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:37:05.969045 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:37:05.969062 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:37:05.969079 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:37:05.969097 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:37:05.969112 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:37:05.969127 | orchestrator | 2026-01-03 00:37:05.969145 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-01-03 00:37:05.969163 | orchestrator | 2026-01-03 00:37:05.969182 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-01-03 00:37:05.969200 | orchestrator | Saturday 03 January 2026 00:37:03 +0000 (0:00:00.596) 
0:00:17.554 ****** 2026-01-03 00:37:05.969218 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:37:05.969236 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:37:05.969256 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:37:05.969274 | orchestrator | ok: [testbed-manager] 2026-01-03 00:37:05.969293 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:37:05.969311 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:37:05.969328 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:37:05.969347 | orchestrator | 2026-01-03 00:37:05.969366 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:37:05.969402 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:37:05.969422 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:37:05.969438 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:37:05.969450 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:37:05.969461 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:37:05.969472 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:37:05.969491 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:37:05.969503 | orchestrator | 2026-01-03 00:37:05.969514 | orchestrator | 2026-01-03 00:37:05.969525 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:37:05.969535 | orchestrator | Saturday 03 January 2026 00:37:05 +0000 (0:00:02.685) 0:00:20.240 ****** 2026-01-03 00:37:05.969546 | orchestrator | 
=============================================================================== 2026-01-03 00:37:05.969557 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.88s 2026-01-03 00:37:05.969568 | orchestrator | Install python3-docker -------------------------------------------------- 2.69s 2026-01-03 00:37:05.969579 | orchestrator | Apply netplan configuration --------------------------------------------- 2.06s 2026-01-03 00:37:05.969590 | orchestrator | Apply netplan configuration --------------------------------------------- 1.83s 2026-01-03 00:37:05.969600 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.78s 2026-01-03 00:37:05.969611 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.73s 2026-01-03 00:37:05.969622 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.49s 2026-01-03 00:37:05.969632 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.45s 2026-01-03 00:37:05.969643 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.41s 2026-01-03 00:37:05.969654 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.65s 2026-01-03 00:37:05.969665 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.60s 2026-01-03 00:37:05.969688 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.59s 2026-01-03 00:37:06.529440 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-01-03 00:37:18.560629 | orchestrator | 2026-01-03 00:37:18 | INFO  | Task 5a10640f-3f13-4008-a5f1-3c716afbb7d5 (reboot) was prepared for execution. 
2026-01-03 00:37:18.560697 | orchestrator | 2026-01-03 00:37:18 | INFO  | It takes a moment until task 5a10640f-3f13-4008-a5f1-3c716afbb7d5 (reboot) has been started and output is visible here. 2026-01-03 00:37:28.664528 | orchestrator | 2026-01-03 00:37:28.664642 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-03 00:37:28.664659 | orchestrator | 2026-01-03 00:37:28.664670 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-03 00:37:28.664682 | orchestrator | Saturday 03 January 2026 00:37:22 +0000 (0:00:00.193) 0:00:00.193 ****** 2026-01-03 00:37:28.664693 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:37:28.664705 | orchestrator | 2026-01-03 00:37:28.664716 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-03 00:37:28.664751 | orchestrator | Saturday 03 January 2026 00:37:22 +0000 (0:00:00.108) 0:00:00.301 ****** 2026-01-03 00:37:28.664763 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:37:28.664774 | orchestrator | 2026-01-03 00:37:28.664785 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-03 00:37:28.664827 | orchestrator | Saturday 03 January 2026 00:37:23 +0000 (0:00:01.014) 0:00:01.316 ****** 2026-01-03 00:37:28.664838 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:37:28.664849 | orchestrator | 2026-01-03 00:37:28.664860 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-03 00:37:28.664871 | orchestrator | 2026-01-03 00:37:28.664882 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-03 00:37:28.664892 | orchestrator | Saturday 03 January 2026 00:37:23 +0000 (0:00:00.094) 0:00:01.411 ****** 2026-01-03 00:37:28.664903 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:37:28.664914 | 
orchestrator |
2026-01-03 00:37:28.664925 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-03 00:37:28.664935 | orchestrator | Saturday 03 January 2026 00:37:24 +0000 (0:00:00.101) 0:00:01.512 ******
2026-01-03 00:37:28.664946 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:37:28.664957 | orchestrator |
2026-01-03 00:37:28.665002 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-03 00:37:28.665013 | orchestrator | Saturday 03 January 2026 00:37:24 +0000 (0:00:00.629) 0:00:02.142 ******
2026-01-03 00:37:28.665024 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:37:28.665035 | orchestrator |
2026-01-03 00:37:28.665048 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-03 00:37:28.665067 | orchestrator |
2026-01-03 00:37:28.665086 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-03 00:37:28.665106 | orchestrator | Saturday 03 January 2026 00:37:24 +0000 (0:00:00.112) 0:00:02.254 ******
2026-01-03 00:37:28.665125 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:37:28.665145 | orchestrator |
2026-01-03 00:37:28.665164 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-03 00:37:28.665183 | orchestrator | Saturday 03 January 2026 00:37:24 +0000 (0:00:00.188) 0:00:02.443 ******
2026-01-03 00:37:28.665201 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:37:28.665221 | orchestrator |
2026-01-03 00:37:28.665241 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-03 00:37:28.665261 | orchestrator | Saturday 03 January 2026 00:37:25 +0000 (0:00:00.682) 0:00:03.125 ******
2026-01-03 00:37:28.665279 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:37:28.665297 | orchestrator |
2026-01-03 00:37:28.665317 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-03 00:37:28.665337 | orchestrator |
2026-01-03 00:37:28.665352 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-03 00:37:28.665365 | orchestrator | Saturday 03 January 2026 00:37:25 +0000 (0:00:00.108) 0:00:03.233 ******
2026-01-03 00:37:28.665378 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:37:28.665393 | orchestrator |
2026-01-03 00:37:28.665405 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-03 00:37:28.665434 | orchestrator | Saturday 03 January 2026 00:37:25 +0000 (0:00:00.096) 0:00:03.330 ******
2026-01-03 00:37:28.665448 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:37:28.665459 | orchestrator |
2026-01-03 00:37:28.665470 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-03 00:37:28.665481 | orchestrator | Saturday 03 January 2026 00:37:26 +0000 (0:00:00.679) 0:00:04.009 ******
2026-01-03 00:37:28.665492 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:37:28.665503 | orchestrator |
2026-01-03 00:37:28.665514 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-03 00:37:28.665525 | orchestrator |
2026-01-03 00:37:28.665536 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-03 00:37:28.665558 | orchestrator | Saturday 03 January 2026 00:37:26 +0000 (0:00:00.105) 0:00:04.115 ******
2026-01-03 00:37:28.665569 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:37:28.665580 | orchestrator |
2026-01-03 00:37:28.665591 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-03 00:37:28.665602 | orchestrator | Saturday 03 January 2026 00:37:26 +0000 (0:00:00.111) 0:00:04.226 ******
2026-01-03 00:37:28.665618 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:37:28.665637 | orchestrator |
2026-01-03 00:37:28.665649 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-03 00:37:28.665660 | orchestrator | Saturday 03 January 2026 00:37:27 +0000 (0:00:00.682) 0:00:04.909 ******
2026-01-03 00:37:28.665671 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:37:28.665682 | orchestrator |
2026-01-03 00:37:28.665693 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-03 00:37:28.665703 | orchestrator |
2026-01-03 00:37:28.665714 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-03 00:37:28.665725 | orchestrator | Saturday 03 January 2026 00:37:27 +0000 (0:00:00.116) 0:00:05.025 ******
2026-01-03 00:37:28.665736 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:37:28.665747 | orchestrator |
2026-01-03 00:37:28.665758 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-03 00:37:28.665769 | orchestrator | Saturday 03 January 2026 00:37:27 +0000 (0:00:00.110) 0:00:05.136 ******
2026-01-03 00:37:28.665780 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:37:28.665791 | orchestrator |
2026-01-03 00:37:28.665808 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-03 00:37:28.665827 | orchestrator | Saturday 03 January 2026 00:37:28 +0000 (0:00:00.672) 0:00:05.809 ******
2026-01-03 00:37:28.665871 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:37:28.665891 | orchestrator |
2026-01-03 00:37:28.665909 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:37:28.665929 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-03 00:37:28.665952 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-03 00:37:28.666166 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-03 00:37:28.666184 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-03 00:37:28.666195 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-03 00:37:28.666206 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-03 00:37:28.666217 | orchestrator |
2026-01-03 00:37:28.666230 | orchestrator |
2026-01-03 00:37:28.666249 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:37:28.666269 | orchestrator | Saturday 03 January 2026 00:37:28 +0000 (0:00:00.042) 0:00:05.852 ******
2026-01-03 00:37:28.666308 | orchestrator | ===============================================================================
2026-01-03 00:37:28.666327 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.36s
2026-01-03 00:37:28.666348 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.72s
2026-01-03 00:37:28.666367 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.58s
2026-01-03 00:37:28.928256 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-01-03 00:37:40.973445 | orchestrator | 2026-01-03 00:37:40 | INFO  | Task 3b1310ca-56c8-4e1f-862b-c5a821003144 (wait-for-connection) was prepared for execution.
2026-01-03 00:37:40.973539 | orchestrator | 2026-01-03 00:37:40 | INFO  | It takes a moment until task 3b1310ca-56c8-4e1f-862b-c5a821003144 (wait-for-connection) has been started and output is visible here.
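The reboot play above is gated by a confirmation variable (visible as `-e ireallymeanit=yes` on the `osism apply` call): the "Exit playbook, if user did not mean to reboot systems" task aborts the play unless the caller explicitly confirmed. A minimal shell sketch of that guard pattern; the function name and messages here are invented for illustration, not taken from the testbed scripts:

```shell
# Hypothetical sketch of a confirmation guard like the one behind
# "Exit playbook, if user did not mean to reboot systems".
confirm_reboot() {
    # Proceed only when the caller passed an explicit "yes"
    # (mirrors -e ireallymeanit=yes on the osism apply call).
    if [ "${1:-no}" != "yes" ]; then
        echo "Refusing to reboot; pass ireallymeanit=yes to confirm." >&2
        return 1
    fi
    echo "confirmed: proceeding with reboot"
}
```

The same design shows up twice more in the trace: the reboot task itself does not wait, and a separate `wait-for-connection` play (below) reconnects once the nodes are back.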
2026-01-03 00:37:56.885037 | orchestrator |
2026-01-03 00:37:56.885151 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-01-03 00:37:56.885168 | orchestrator |
2026-01-03 00:37:56.885180 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-01-03 00:37:56.885193 | orchestrator | Saturday 03 January 2026 00:37:45 +0000 (0:00:00.168) 0:00:00.168 ******
2026-01-03 00:37:56.885205 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:37:56.885217 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:37:56.885228 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:37:56.885239 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:37:56.885249 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:37:56.885260 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:37:56.885270 | orchestrator |
2026-01-03 00:37:56.885300 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:37:56.885312 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:37:56.885324 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:37:56.885335 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:37:56.885346 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:37:56.885357 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:37:56.885368 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:37:56.885379 | orchestrator |
2026-01-03 00:37:56.885390 | orchestrator |
2026-01-03 00:37:56.885401 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:37:56.885411 | orchestrator | Saturday 03 January 2026 00:37:56 +0000 (0:00:11.578) 0:00:11.747 ******
2026-01-03 00:37:56.885422 | orchestrator | ===============================================================================
2026-01-03 00:37:56.885433 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.58s
2026-01-03 00:37:57.156082 | orchestrator | + osism apply hddtemp
2026-01-03 00:38:09.231658 | orchestrator | 2026-01-03 00:38:09 | INFO  | Task e0a8f9f4-6cfd-4841-a0a1-17729b0b6517 (hddtemp) was prepared for execution.
2026-01-03 00:38:09.231768 | orchestrator | 2026-01-03 00:38:09 | INFO  | It takes a moment until task e0a8f9f4-6cfd-4841-a0a1-17729b0b6517 (hddtemp) has been started and output is visible here.
2026-01-03 00:38:38.012107 | orchestrator |
2026-01-03 00:38:38.012219 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-01-03 00:38:38.012237 | orchestrator |
2026-01-03 00:38:38.012251 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-01-03 00:38:38.012264 | orchestrator | Saturday 03 January 2026 00:38:13 +0000 (0:00:00.247) 0:00:00.247 ******
2026-01-03 00:38:38.012276 | orchestrator | ok: [testbed-manager]
2026-01-03 00:38:38.012289 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:38:38.012300 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:38:38.012313 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:38:38.012324 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:38:38.012336 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:38:38.012347 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:38:38.012358 | orchestrator |
2026-01-03 00:38:38.012370 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-01-03 00:38:38.012408 | orchestrator | Saturday 03 January 2026 00:38:14 +0000 (0:00:00.671) 0:00:00.919 ******
2026-01-03 00:38:38.012421 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:38:38.012435 | orchestrator |
2026-01-03 00:38:38.012447 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-01-03 00:38:38.012458 | orchestrator | Saturday 03 January 2026 00:38:15 +0000 (0:00:01.143) 0:00:02.062 ******
2026-01-03 00:38:38.012470 | orchestrator | ok: [testbed-manager]
2026-01-03 00:38:38.012481 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:38:38.012493 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:38:38.012504 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:38:38.012515 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:38:38.012526 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:38:38.012538 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:38:38.012549 | orchestrator |
2026-01-03 00:38:38.012561 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-01-03 00:38:38.012572 | orchestrator | Saturday 03 January 2026 00:38:17 +0000 (0:00:02.204) 0:00:04.266 ******
2026-01-03 00:38:38.012584 | orchestrator | changed: [testbed-manager]
2026-01-03 00:38:38.012596 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:38:38.012607 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:38:38.012618 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:38:38.012630 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:38:38.012644 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:38:38.012657 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:38:38.012670 | orchestrator |
2026-01-03 00:38:38.012683 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-01-03 00:38:38.012697 | orchestrator | Saturday 03 January 2026 00:38:18 +0000 (0:00:01.125) 0:00:05.392 ******
2026-01-03 00:38:38.012710 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:38:38.012723 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:38:38.012737 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:38:38.012750 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:38:38.012761 | orchestrator | ok: [testbed-manager]
2026-01-03 00:38:38.012773 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:38:38.012784 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:38:38.012795 | orchestrator |
2026-01-03 00:38:38.012807 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-01-03 00:38:38.012819 | orchestrator | Saturday 03 January 2026 00:38:19 +0000 (0:00:01.114) 0:00:06.506 ******
2026-01-03 00:38:38.012830 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:38:38.012841 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:38:38.012852 | orchestrator | changed: [testbed-manager]
2026-01-03 00:38:38.012864 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:38:38.012875 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:38:38.012886 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:38:38.012922 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:38:38.012933 | orchestrator |
2026-01-03 00:38:38.012958 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-01-03 00:38:38.012970 | orchestrator | Saturday 03 January 2026 00:38:20 +0000 (0:00:00.804) 0:00:07.311 ******
2026-01-03 00:38:38.012980 | orchestrator | changed: [testbed-manager]
2026-01-03 00:38:38.012991 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:38:38.013002 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:38:38.013013 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:38:38.013023 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:38:38.013034 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:38:38.013044 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:38:38.013055 | orchestrator |
2026-01-03 00:38:38.013066 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-01-03 00:38:38.013077 | orchestrator | Saturday 03 January 2026 00:38:34 +0000 (0:00:14.214) 0:00:21.526 ******
2026-01-03 00:38:38.013096 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:38:38.013108 | orchestrator |
2026-01-03 00:38:38.013119 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-01-03 00:38:38.013129 | orchestrator | Saturday 03 January 2026 00:38:35 +0000 (0:00:01.160) 0:00:22.686 ******
2026-01-03 00:38:38.013140 | orchestrator | changed: [testbed-manager]
2026-01-03 00:38:38.013151 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:38:38.013161 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:38:38.013172 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:38:38.013183 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:38:38.013193 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:38:38.013204 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:38:38.013214 | orchestrator |
2026-01-03 00:38:38.013225 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:38:38.013236 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:38:38.013267 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-03 00:38:38.013279 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-03 00:38:38.013291 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-03 00:38:38.013302 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-03 00:38:38.013312 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-03 00:38:38.013323 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-03 00:38:38.013334 | orchestrator |
2026-01-03 00:38:38.013344 | orchestrator |
2026-01-03 00:38:38.013355 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:38:38.013366 | orchestrator | Saturday 03 January 2026 00:38:37 +0000 (0:00:01.883) 0:00:24.570 ******
2026-01-03 00:38:38.013377 | orchestrator | ===============================================================================
2026-01-03 00:38:38.013388 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 14.21s
2026-01-03 00:38:38.013399 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.20s
2026-01-03 00:38:38.013409 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.88s
2026-01-03 00:38:38.013420 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.16s
2026-01-03 00:38:38.013431 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.14s
2026-01-03 00:38:38.013441 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.13s
2026-01-03 00:38:38.013452 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.11s
2026-01-03 00:38:38.013463 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.80s
2026-01-03 00:38:38.013473 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.67s
2026-01-03 00:38:38.296321 | orchestrator | ++ semver latest 7.1.1
2026-01-03 00:38:38.357847 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-03 00:38:38.357973 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-01-03 00:38:38.357990 | orchestrator | + sudo systemctl restart manager.service
2026-01-03 00:38:51.587225 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-01-03 00:38:51.587342 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-01-03 00:38:51.587360 | orchestrator | + local max_attempts=60
2026-01-03 00:38:51.587373 | orchestrator | + local name=ceph-ansible
2026-01-03 00:38:51.587384 | orchestrator | + local attempt_num=1
2026-01-03 00:38:51.587395 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-03 00:38:51.616987 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-03 00:38:51.617073 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-03 00:38:51.617086 | orchestrator | + sleep 5
2026-01-03 00:38:56.622171 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-03 00:38:56.653183 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-03 00:38:56.653271 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-03 00:38:56.653284 | orchestrator | + sleep 5
2026-01-03 00:39:01.656422 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-03 00:39:01.689088 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-03 00:39:01.689192 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-03 00:39:01.689207 | orchestrator | + sleep 5
2026-01-03 00:39:06.692583 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-03 00:39:06.732142 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-03 00:39:06.732226 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-03 00:39:06.732238 | orchestrator | + sleep 5
2026-01-03 00:39:11.737178 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-03 00:39:11.772404 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-03 00:39:11.772519 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-03 00:39:11.772534 | orchestrator | + sleep 5
2026-01-03 00:39:16.777794 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-03 00:39:16.821334 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-03 00:39:16.821433 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-03 00:39:16.821448 | orchestrator | + sleep 5
2026-01-03 00:39:21.825809 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-03 00:39:21.860031 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-03 00:39:21.860128 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-03 00:39:21.860143 | orchestrator | + sleep 5
2026-01-03 00:39:26.864510 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-03 00:39:26.896660 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-03 00:39:26.896737 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-03 00:39:26.896751 | orchestrator | + sleep 5
2026-01-03 00:39:31.899452 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-03 00:39:31.943199 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-03 00:39:31.943290 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-03 00:39:31.943304 | orchestrator | + sleep 5
2026-01-03 00:39:36.946492 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-03 00:39:36.985276 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-03 00:39:36.985377 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-03 00:39:36.985393 | orchestrator | + sleep 5
2026-01-03 00:39:41.989551 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-03 00:39:42.032888 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-03 00:39:42.032980 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-03 00:39:42.032989 | orchestrator | + sleep 5
2026-01-03 00:39:47.037337 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-03 00:39:47.082260 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-03 00:39:47.082349 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-03 00:39:47.082363 | orchestrator | + sleep 5
2026-01-03 00:39:52.087265 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-03 00:39:52.125366 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-03 00:39:52.125459 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-03 00:39:52.125473 | orchestrator | + sleep 5
2026-01-03 00:39:57.130537 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-03 00:39:57.171303 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-03 00:39:57.171384 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-01-03 00:39:57.171395 | orchestrator | + local max_attempts=60
2026-01-03 00:39:57.171422 | orchestrator | + local name=kolla-ansible
2026-01-03 00:39:57.171431 | orchestrator | + local attempt_num=1
2026-01-03 00:39:57.172127 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-01-03 00:39:57.205593 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-03 00:39:57.205677 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-01-03 00:39:57.205689 | orchestrator | + local max_attempts=60
2026-01-03 00:39:57.205701 | orchestrator | + local name=osism-ansible
2026-01-03 00:39:57.205712 | orchestrator | + local attempt_num=1
2026-01-03 00:39:57.206390 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-01-03 00:39:57.239206 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-03 00:39:57.239292 | orchestrator | + [[ true == \t\r\u\e ]]
2026-01-03 00:39:57.239306 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-01-03 00:39:57.392338 | orchestrator | ARA in ceph-ansible already disabled.
2026-01-03 00:39:57.529057 | orchestrator | ARA in kolla-ansible already disabled.
2026-01-03 00:39:57.654773 | orchestrator | ARA in osism-ansible already disabled.
2026-01-03 00:39:57.806240 | orchestrator | ARA in osism-kubernetes already disabled.
2026-01-03 00:39:57.806543 | orchestrator | + osism apply gather-facts
2026-01-03 00:40:09.945304 | orchestrator | 2026-01-03 00:40:09 | INFO  | Task 431fca4c-ceb2-4088-b5ce-55423abc5bc2 (gather-facts) was prepared for execution.
2026-01-03 00:40:09.945371 | orchestrator | 2026-01-03 00:40:09 | INFO  | It takes a moment until task 431fca4c-ceb2-4088-b5ce-55423abc5bc2 (gather-facts) has been started and output is visible here.
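The repeated `docker inspect` / `sleep 5` lines in the xtrace above come from a `wait_for_container_healthy` helper that polls each container's health status until it reports `healthy` (the trace shows ceph-ansible going `unhealthy` → `starting` → `healthy`). A reconstructed sketch of that loop, under the assumption that it gives up after `max_attempts` probes; `container_health` wraps the real `docker inspect -f '{{.State.Health.Status}}'` call so the loop reads self-contained:

```shell
# Sketch only; the real helper lives in the testbed scripts and uses
# bash's (( attempt_num++ == max_attempts )) arithmetic seen in the trace.
container_health() {
    # Stand-in for: /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$1"
    /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$1"
}

wait_for_container_healthy() {
    max_attempts=$1
    name=$2
    attempt_num=1
    # Probe until the container reports "healthy", sleeping 5s between probes.
    while [ "$(container_health "$name")" != "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

With 60 attempts at 5-second intervals this bounds the wait at roughly five minutes per container, which matches the three back-to-back calls for ceph-ansible, kolla-ansible, and osism-ansible in the log.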
2026-01-03 00:40:22.651188 | orchestrator | 2026-01-03 00:40:22.651305 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-03 00:40:22.651323 | orchestrator | 2026-01-03 00:40:22.651336 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-03 00:40:22.651348 | orchestrator | Saturday 03 January 2026 00:40:13 +0000 (0:00:00.190) 0:00:00.190 ****** 2026-01-03 00:40:22.651360 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:40:22.651372 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:40:22.651383 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:40:22.651394 | orchestrator | ok: [testbed-manager] 2026-01-03 00:40:22.651405 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:40:22.651416 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:40:22.651426 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:40:22.651437 | orchestrator | 2026-01-03 00:40:22.651448 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-03 00:40:22.651467 | orchestrator | 2026-01-03 00:40:22.651487 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-03 00:40:22.651506 | orchestrator | Saturday 03 January 2026 00:40:21 +0000 (0:00:08.683) 0:00:08.873 ****** 2026-01-03 00:40:22.651641 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:40:22.651656 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:40:22.651667 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:40:22.651679 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:40:22.651690 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:40:22.651701 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:40:22.651711 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:40:22.651722 | orchestrator | 2026-01-03 00:40:22.651735 | orchestrator | PLAY RECAP 
********************************************************************* 2026-01-03 00:40:22.651749 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:40:22.651764 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:40:22.651776 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:40:22.651789 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:40:22.651878 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:40:22.651895 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:40:22.651908 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:40:22.651919 | orchestrator | 2026-01-03 00:40:22.651930 | orchestrator | 2026-01-03 00:40:22.651941 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:40:22.651952 | orchestrator | Saturday 03 January 2026 00:40:22 +0000 (0:00:00.434) 0:00:09.308 ****** 2026-01-03 00:40:22.651963 | orchestrator | =============================================================================== 2026-01-03 00:40:22.651994 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.68s 2026-01-03 00:40:22.652013 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.43s 2026-01-03 00:40:22.871747 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-01-03 00:40:22.883881 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-01-03 00:40:22.891696 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-01-03 00:40:22.907985 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-01-03 00:40:22.916656 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-01-03 00:40:22.934167 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-01-03 00:40:22.943963 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-01-03 00:40:22.961082 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-01-03 00:40:22.970160 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-01-03 00:40:22.983777 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-01-03 00:40:22.993177 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-01-03 00:40:23.008438 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-01-03 00:40:23.017636 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-01-03 00:40:23.030486 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-01-03 00:40:23.043905 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-01-03 00:40:23.054524 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-01-03 00:40:23.066631 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2026-01-03 00:40:23.082662 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-01-03 00:40:23.095545 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-01-03 00:40:23.112181 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-01-03 00:40:23.128242 | orchestrator | + [[ false == \t\r\u\e ]]
2026-01-03 00:40:23.249781 | orchestrator | ok: Runtime: 0:24:16.824943
2026-01-03 00:40:23.372246 |
2026-01-03 00:40:23.372391 | TASK [Deploy services]
2026-01-03 00:40:23.908897 | orchestrator | skipping: Conditional result was False
2026-01-03 00:40:23.931947 |
2026-01-03 00:40:23.932239 | TASK [Deploy in a nutshell]
2026-01-03 00:40:24.671752 | orchestrator | + set -e
2026-01-03 00:40:24.672092 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-03 00:40:24.672124 | orchestrator | ++ export INTERACTIVE=false
2026-01-03 00:40:24.672145 | orchestrator | ++ INTERACTIVE=false
2026-01-03 00:40:24.672159 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-03 00:40:24.672171 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-03 00:40:24.672185 | orchestrator | + source /opt/manager-vars.sh
2026-01-03 00:40:24.672232 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-01-03 00:40:24.672261 | orchestrator | ++ NUMBER_OF_NODES=6
2026-01-03 00:40:24.672276 | orchestrator | ++ export CEPH_VERSION=reef
2026-01-03 00:40:24.672292 | orchestrator | ++ CEPH_VERSION=reef
2026-01-03 00:40:24.672304 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-01-03 00:40:24.672323 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-01-03 00:40:24.672334 | orchestrator | ++ export MANAGER_VERSION=latest
2026-01-03 00:40:24.672354 | orchestrator | ++ MANAGER_VERSION=latest
2026-01-03 00:40:24.672365 | orchestrator | ++ export OPENSTACK_VERSION=2025.1
2026-01-03 00:40:24.672380 | orchestrator | ++ OPENSTACK_VERSION=2025.1
2026-01-03 00:40:24.672391 | orchestrator | ++ export ARA=false
2026-01-03 00:40:24.672403 | orchestrator | ++ ARA=false
2026-01-03 00:40:24.672414 | orchestrator | ++ export DEPLOY_MODE=manager
2026-01-03 00:40:24.672426 | orchestrator | ++ DEPLOY_MODE=manager
2026-01-03 00:40:24.672452 | orchestrator | ++ export TEMPEST=true
2026-01-03 00:40:24.672463 | orchestrator | ++ TEMPEST=true
2026-01-03 00:40:24.672474 | orchestrator | ++ export IS_ZUUL=true
2026-01-03 00:40:24.672485 | orchestrator | ++ IS_ZUUL=true
2026-01-03 00:40:24.672496 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.133
2026-01-03 00:40:24.672508 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.133
2026-01-03 00:40:24.672518 | orchestrator | ++ export EXTERNAL_API=false
2026-01-03 00:40:24.672529 | orchestrator | ++ EXTERNAL_API=false
2026-01-03 00:40:24.672539 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-01-03 00:40:24.672550 | orchestrator | ++ IMAGE_USER=ubuntu
2026-01-03 00:40:24.672561 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-01-03 00:40:24.672571 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-01-03 00:40:24.672583 | orchestrator |
2026-01-03 00:40:24.672594 | orchestrator | # PULL IMAGES
2026-01-03 00:40:24.672605 | orchestrator |
2026-01-03 00:40:24.672616 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-01-03 00:40:24.672635 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-01-03 00:40:24.672648 | orchestrator | + echo
2026-01-03 00:40:24.672667 | orchestrator | + echo '# PULL IMAGES'
2026-01-03 00:40:24.672696 | orchestrator | + echo
2026-01-03 00:40:24.673092 | orchestrator | ++ semver latest 7.0.0
2026-01-03 00:40:24.727100 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-03 00:40:24.727199 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-01-03 00:40:24.727216 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-01-03 00:40:26.422973 | orchestrator | 2026-01-03 00:40:26 | INFO  | Trying to run play pull-images in environment custom
2026-01-03 00:40:36.506537 | orchestrator | 2026-01-03 00:40:36 | INFO  | Task 8bfe1cbe-8fa2-4b26-8398-6a3c41d4b139 (pull-images) was prepared for execution.
2026-01-03 00:40:36.506664 | orchestrator | 2026-01-03 00:40:36 | INFO  | Task 8bfe1cbe-8fa2-4b26-8398-6a3c41d4b139 is running in background. No more output. Check ARA for logs.
2026-01-03 00:40:38.795333 | orchestrator | 2026-01-03 00:40:38 | INFO  | Trying to run play wipe-partitions in environment custom
2026-01-03 00:40:49.020486 | orchestrator | 2026-01-03 00:40:49 | INFO  | Task 10b27d84-842d-4104-86ff-170db80acfb7 (wipe-partitions) was prepared for execution.
2026-01-03 00:40:49.020577 | orchestrator | 2026-01-03 00:40:49 | INFO  | It takes a moment until task 10b27d84-842d-4104-86ff-170db80acfb7 (wipe-partitions) has been started and output is visible here.
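[Editor's note] The wipe-partitions play that runs next boils down to a small per-device sequence: check the block device, wipe filesystem/partition signatures, zero the first 32M, then refresh udev. A minimal sketch under stated assumptions — the function name and `DRY_RUN` switch are illustrative, not part of the play; only the tool invocations mirror the task names in the log:

```shell
# Hypothetical sketch of the per-device wipe sequence from the
# wipe-partitions play. DRY_RUN=1 prints each command instead of
# executing it, so the flow can be inspected without real disks.
wipe_device() {
    dev="$1"
    if [ "${DRY_RUN:-0}" = "1" ]; then run=echo; else run=; fi
    $run wipefs --all "$dev"                        # Wipe partitions with wipefs
    $run dd if=/dev/zero of="$dev" bs=1M count=32   # Overwrite first 32M with zeros
    $run udevadm control --reload-rules             # Reload udev rules
    $run udevadm trigger                            # Request device events from the kernel
}

DRY_RUN=1 wipe_device /dev/sdb
```

Without `DRY_RUN=1` this would need root and a disposable device; the play applies the equivalent steps to /dev/sdb, /dev/sdc and /dev/sdd on each storage node.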
2026-01-03 00:41:01.930453 | orchestrator |
2026-01-03 00:41:01.930568 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-01-03 00:41:01.930587 | orchestrator |
2026-01-03 00:41:01.930599 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-01-03 00:41:01.930619 | orchestrator | Saturday 03 January 2026 00:40:53 +0000 (0:00:00.144) 0:00:00.144 ******
2026-01-03 00:41:01.930633 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:41:01.930645 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:41:01.930660 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:41:01.930681 | orchestrator |
2026-01-03 00:41:01.930702 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-01-03 00:41:01.930757 | orchestrator | Saturday 03 January 2026 00:40:54 +0000 (0:00:00.616) 0:00:00.760 ******
2026-01-03 00:41:01.930849 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:01.930864 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:41:01.930881 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:41:01.930892 | orchestrator |
2026-01-03 00:41:01.930903 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-01-03 00:41:01.930914 | orchestrator | Saturday 03 January 2026 00:40:54 +0000 (0:00:00.367) 0:00:01.128 ******
2026-01-03 00:41:01.930925 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:41:01.930937 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:41:01.930947 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:41:01.930958 | orchestrator |
2026-01-03 00:41:01.930970 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-01-03 00:41:01.930984 | orchestrator | Saturday 03 January 2026 00:40:55 +0000 (0:00:00.567) 0:00:01.695 ******
2026-01-03 00:41:01.930997 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:01.931010 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:41:01.931022 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:41:01.931036 | orchestrator |
2026-01-03 00:41:01.931057 | orchestrator | TASK [Check device availability] ***********************************************
2026-01-03 00:41:01.931076 | orchestrator | Saturday 03 January 2026 00:40:55 +0000 (0:00:00.261) 0:00:01.956 ******
2026-01-03 00:41:01.931184 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-01-03 00:41:01.931214 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-01-03 00:41:01.931234 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-01-03 00:41:01.931253 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-01-03 00:41:01.931272 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-01-03 00:41:01.931290 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-01-03 00:41:01.931310 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-01-03 00:41:01.931327 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-01-03 00:41:01.931346 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-01-03 00:41:01.931365 | orchestrator |
2026-01-03 00:41:01.931386 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-01-03 00:41:01.931406 | orchestrator | Saturday 03 January 2026 00:40:56 +0000 (0:00:01.174) 0:00:03.131 ******
2026-01-03 00:41:01.931426 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-01-03 00:41:01.931445 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-01-03 00:41:01.931463 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-01-03 00:41:01.931483 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-01-03 00:41:01.931501 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-01-03 00:41:01.931521 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-01-03 00:41:01.931541 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-01-03 00:41:01.931560 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-01-03 00:41:01.931581 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-01-03 00:41:01.931601 | orchestrator |
2026-01-03 00:41:01.931620 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-01-03 00:41:01.931639 | orchestrator | Saturday 03 January 2026 00:40:58 +0000 (0:00:01.624) 0:00:04.755 ******
2026-01-03 00:41:01.931657 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-01-03 00:41:01.931675 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-01-03 00:41:01.931694 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-01-03 00:41:01.931713 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-01-03 00:41:01.931732 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-01-03 00:41:01.931763 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-01-03 00:41:01.931820 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-01-03 00:41:01.931859 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-01-03 00:41:01.931878 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-01-03 00:41:01.931896 | orchestrator |
2026-01-03 00:41:01.931915 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-01-03 00:41:01.931932 | orchestrator | Saturday 03 January 2026 00:41:00 +0000 (0:00:00.635) 0:00:06.883 ******
2026-01-03 00:41:01.931950 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:41:01.931967 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:41:01.931983 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:41:01.932002 | orchestrator |
2026-01-03 00:41:01.932020 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-01-03 00:41:01.932038 | orchestrator | Saturday 03 January 2026 00:41:00 +0000 (0:00:00.635) 0:00:07.519 ******
2026-01-03 00:41:01.932057 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:41:01.932074 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:41:01.932092 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:41:01.932109 | orchestrator |
2026-01-03 00:41:01.932128 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:41:01.932233 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-03 00:41:01.932257 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-03 00:41:01.932307 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-03 00:41:01.932327 | orchestrator |
2026-01-03 00:41:01.932344 | orchestrator |
2026-01-03 00:41:01.932361 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:41:01.932379 | orchestrator | Saturday 03 January 2026 00:41:01 +0000 (0:00:00.709) 0:00:08.228 ******
2026-01-03 00:41:01.932397 | orchestrator | ===============================================================================
2026-01-03 00:41:01.932414 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.13s
2026-01-03 00:41:01.932431 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.62s
2026-01-03 00:41:01.932447 | orchestrator | Check device availability ----------------------------------------------- 1.17s
2026-01-03 00:41:01.932465 | orchestrator | Request device events from the kernel ----------------------------------- 0.71s
2026-01-03 00:41:01.932484 | orchestrator | Reload udev rules ------------------------------------------------------- 0.64s
2026-01-03 00:41:01.932502 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.62s
2026-01-03 00:41:01.932521 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.57s
2026-01-03 00:41:01.932540 | orchestrator | Remove all rook related logical devices --------------------------------- 0.37s
2026-01-03 00:41:01.932557 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.26s
2026-01-03 00:41:14.364076 | orchestrator | 2026-01-03 00:41:14 | INFO  | Task ed0ccd9b-08da-4c91-a332-90b3bc58ff6f (facts) was prepared for execution.
2026-01-03 00:41:14.364182 | orchestrator | 2026-01-03 00:41:14 | INFO  | It takes a moment until task ed0ccd9b-08da-4c91-a332-90b3bc58ff6f (facts) has been started and output is visible here.
2026-01-03 00:41:26.063831 | orchestrator |
2026-01-03 00:41:26.063943 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-01-03 00:41:26.063959 | orchestrator |
2026-01-03 00:41:26.063971 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-03 00:41:26.063982 | orchestrator | Saturday 03 January 2026 00:41:18 +0000 (0:00:00.255) 0:00:00.255 ******
2026-01-03 00:41:26.063992 | orchestrator | ok: [testbed-manager]
2026-01-03 00:41:26.064003 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:41:26.064013 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:41:26.064049 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:41:26.064060 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:41:26.064069 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:41:26.064079 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:41:26.064088 | orchestrator |
2026-01-03 00:41:26.064100 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-03 00:41:26.064110 | orchestrator | Saturday 03 January 2026 00:41:19 +0000 (0:00:00.900) 0:00:01.156 ******
2026-01-03 00:41:26.064120 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:41:26.064130 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:41:26.064139 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:41:26.064149 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:41:26.064158 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:26.064168 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:41:26.064177 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:41:26.064187 | orchestrator |
2026-01-03 00:41:26.064196 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-03 00:41:26.064206 | orchestrator |
2026-01-03 00:41:26.064215 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-03 00:41:26.064225 | orchestrator | Saturday 03 January 2026 00:41:20 +0000 (0:00:00.904) 0:00:02.060 ******
2026-01-03 00:41:26.064234 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:41:26.064244 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:41:26.064254 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:41:26.064264 | orchestrator | ok: [testbed-manager]
2026-01-03 00:41:26.064273 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:41:26.064283 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:41:26.064292 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:41:26.064302 | orchestrator |
2026-01-03 00:41:26.064312 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-01-03 00:41:26.064321 | orchestrator |
2026-01-03 00:41:26.064331 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-01-03 00:41:26.064355 | orchestrator | Saturday 03 January 2026 00:41:25 +0000 (0:00:04.838) 0:00:06.899 ******
2026-01-03 00:41:26.064368 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:41:26.064379 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:41:26.064391 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:41:26.064403 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:41:26.064414 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:26.064425 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:41:26.064436 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:41:26.064447 | orchestrator |
2026-01-03 00:41:26.064459 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:41:26.064471 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-03 00:41:26.064483 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-03 00:41:26.064495 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-03 00:41:26.064507 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-03 00:41:26.064519 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-03 00:41:26.064530 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-03 00:41:26.064542 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-03 00:41:26.064554 | orchestrator |
2026-01-03 00:41:26.064573 | orchestrator |
2026-01-03 00:41:26.064585 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:41:26.064596 | orchestrator | Saturday 03 January 2026 00:41:25 +0000 (0:00:00.522) 0:00:07.421 ******
2026-01-03 00:41:26.064608 | orchestrator | ===============================================================================
2026-01-03 00:41:26.064620 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.84s
2026-01-03 00:41:26.064632 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 0.90s
2026-01-03 00:41:26.064644 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.90s
2026-01-03 00:41:26.064656 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s
2026-01-03 00:41:28.391218 | orchestrator | 2026-01-03 00:41:28 | INFO  | Task 6d2f3b99-8fac-4b18-b27a-ca70bdbd8fe3 (ceph-configure-lvm-volumes) was prepared for execution.
2026-01-03 00:41:28.391319 | orchestrator | 2026-01-03 00:41:28 | INFO  | It takes a moment until task 6d2f3b99-8fac-4b18-b27a-ca70bdbd8fe3 (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-01-03 00:41:39.804656 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-03 00:41:39.804847 | orchestrator | 2.16.14
2026-01-03 00:41:39.804876 | orchestrator |
2026-01-03 00:41:39.804896 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-03 00:41:39.804916 | orchestrator |
2026-01-03 00:41:39.804938 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-03 00:41:39.804958 | orchestrator | Saturday 03 January 2026 00:41:32 +0000 (0:00:00.311) 0:00:00.312 ******
2026-01-03 00:41:39.804978 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-03 00:41:39.804998 | orchestrator |
2026-01-03 00:41:39.805016 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-03 00:41:39.805028 | orchestrator | Saturday 03 January 2026 00:41:33 +0000 (0:00:00.252) 0:00:00.564 ******
2026-01-03 00:41:39.805039 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:41:39.805051 | orchestrator |
2026-01-03 00:41:39.805062 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:41:39.805073 | orchestrator | Saturday 03 January 2026 00:41:33 +0000 (0:00:00.225) 0:00:00.789 ******
2026-01-03 00:41:39.805085 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-03 00:41:39.805096 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-03 00:41:39.805107 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-03 00:41:39.805118 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-03 00:41:39.805129 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-03 00:41:39.805140 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-03 00:41:39.805150 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-03 00:41:39.805161 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-03 00:41:39.805174 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-03 00:41:39.805188 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-03 00:41:39.805210 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-03 00:41:39.805223 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-03 00:41:39.805238 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-03 00:41:39.805257 | orchestrator |
2026-01-03 00:41:39.805277 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:41:39.805325 | orchestrator | Saturday 03 January 2026 00:41:33 +0000 (0:00:00.461) 0:00:01.251 ******
2026-01-03 00:41:39.805347 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:39.805368 | orchestrator |
2026-01-03 00:41:39.805408 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:41:39.805434 | orchestrator | Saturday 03 January 2026 00:41:33 +0000 (0:00:00.186) 0:00:01.437 ******
2026-01-03 00:41:39.805448 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:39.805461 | orchestrator |
2026-01-03 00:41:39.805474 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:41:39.805491 | orchestrator | Saturday 03 January 2026 00:41:34 +0000 (0:00:00.207) 0:00:01.644 ******
2026-01-03 00:41:39.805510 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:39.805529 | orchestrator |
2026-01-03 00:41:39.805545 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:41:39.805568 | orchestrator | Saturday 03 January 2026 00:41:34 +0000 (0:00:00.194) 0:00:01.838 ******
2026-01-03 00:41:39.805586 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:39.805605 | orchestrator |
2026-01-03 00:41:39.805624 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:41:39.805644 | orchestrator | Saturday 03 January 2026 00:41:34 +0000 (0:00:00.203) 0:00:02.041 ******
2026-01-03 00:41:39.805663 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:39.805681 | orchestrator |
2026-01-03 00:41:39.805700 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:41:39.805718 | orchestrator | Saturday 03 January 2026 00:41:34 +0000 (0:00:00.201) 0:00:02.243 ******
2026-01-03 00:41:39.805763 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:39.805781 | orchestrator |
2026-01-03 00:41:39.805799 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:41:39.805818 | orchestrator | Saturday 03 January 2026 00:41:34 +0000 (0:00:00.190) 0:00:02.434 ******
2026-01-03 00:41:39.805837 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:39.805856 | orchestrator |
2026-01-03 00:41:39.805874 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:41:39.805893 | orchestrator | Saturday 03 January 2026 00:41:35 +0000 (0:00:00.199) 0:00:02.633 ******
2026-01-03 00:41:39.805905 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:39.805915 | orchestrator |
2026-01-03 00:41:39.805926 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:41:39.805937 | orchestrator | Saturday 03 January 2026 00:41:35 +0000 (0:00:00.203) 0:00:02.837 ******
2026-01-03 00:41:39.805948 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba)
2026-01-03 00:41:39.805960 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba)
2026-01-03 00:41:39.805970 | orchestrator |
2026-01-03 00:41:39.805981 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:41:39.806072 | orchestrator | Saturday 03 January 2026 00:41:35 +0000 (0:00:00.386) 0:00:03.224 ******
2026-01-03 00:41:39.806088 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2050ce1a-3081-4edd-a04d-3576bece8338)
2026-01-03 00:41:39.806099 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2050ce1a-3081-4edd-a04d-3576bece8338)
2026-01-03 00:41:39.806111 | orchestrator |
2026-01-03 00:41:39.806122 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:41:39.806133 | orchestrator | Saturday 03 January 2026 00:41:36 +0000 (0:00:00.613) 0:00:03.837 ******
2026-01-03 00:41:39.806144 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_deb598c2-f543-4f9b-b077-315ce19fa743)
2026-01-03 00:41:39.806155 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_deb598c2-f543-4f9b-b077-315ce19fa743)
2026-01-03 00:41:39.806166 | orchestrator |
2026-01-03 00:41:39.806177 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:41:39.806200 | orchestrator | Saturday 03 January 2026 00:41:36 +0000 (0:00:00.596) 0:00:04.434 ******
2026-01-03 00:41:39.806211 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f493d531-f14a-40ab-852d-4e184520cb25)
2026-01-03 00:41:39.806222 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f493d531-f14a-40ab-852d-4e184520cb25)
2026-01-03 00:41:39.806233 | orchestrator |
2026-01-03 00:41:39.806244 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:41:39.806255 | orchestrator | Saturday 03 January 2026 00:41:37 +0000 (0:00:00.787) 0:00:05.221 ******
2026-01-03 00:41:39.806266 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-03 00:41:39.806277 | orchestrator |
2026-01-03 00:41:39.806295 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:41:39.806306 | orchestrator | Saturday 03 January 2026 00:41:38 +0000 (0:00:00.330) 0:00:05.552 ******
2026-01-03 00:41:39.806317 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-01-03 00:41:39.806329 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-01-03 00:41:39.806340 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-01-03 00:41:39.806351 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-01-03 00:41:39.806362 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-01-03 00:41:39.806372 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-01-03 00:41:39.806383 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-01-03 00:41:39.806394 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-01-03 00:41:39.806405 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-01-03 00:41:39.806416 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-01-03 00:41:39.806427 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-01-03 00:41:39.806438 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-01-03 00:41:39.806449 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-01-03 00:41:39.806459 | orchestrator |
2026-01-03 00:41:39.806470 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:41:39.806482 | orchestrator | Saturday 03 January 2026 00:41:38 +0000 (0:00:00.396) 0:00:05.949 ******
2026-01-03 00:41:39.806493 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:39.806503 | orchestrator |
2026-01-03 00:41:39.806514 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:41:39.806525 | orchestrator | Saturday 03 January 2026 00:41:38 +0000 (0:00:00.201) 0:00:06.150 ******
2026-01-03 00:41:39.806536 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:39.806547 | orchestrator |
2026-01-03 00:41:39.806558 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:41:39.806569 | orchestrator | Saturday 03 January 2026 00:41:38 +0000 (0:00:00.207) 0:00:06.358 ******
2026-01-03 00:41:39.806579 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:39.806590 | orchestrator |
2026-01-03 00:41:39.806601 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:41:39.806612 | orchestrator | Saturday 03 January 2026 00:41:39 +0000 (0:00:00.200) 0:00:06.559 ******
2026-01-03 00:41:39.806623 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:39.806634 | orchestrator |
2026-01-03 00:41:39.806645 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:41:39.806656 | orchestrator | Saturday 03 January 2026 00:41:39 +0000 (0:00:00.185) 0:00:06.744 ******
2026-01-03 00:41:39.806674 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:39.806684 | orchestrator |
2026-01-03 00:41:39.806695 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:41:39.806706 | orchestrator | Saturday 03 January 2026 00:41:39 +0000 (0:00:00.190) 0:00:06.935 ******
2026-01-03 00:41:39.806717 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:39.806727 | orchestrator |
2026-01-03 00:41:39.806793 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:41:39.806807 | orchestrator | Saturday 03 January 2026 00:41:39 +0000 (0:00:00.199) 0:00:07.135 ******
2026-01-03 00:41:39.806827 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:39.806846 | orchestrator |
2026-01-03 00:41:39.806877 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:41:47.036968 | orchestrator | Saturday 03 January 2026 00:41:39 +0000 (0:00:00.193) 0:00:07.328 ******
2026-01-03 00:41:47.037081 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:47.037097 | orchestrator |
2026-01-03 00:41:47.037110 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:41:47.037122 | orchestrator | Saturday 03 January 2026 00:41:39 +0000 (0:00:00.200) 0:00:07.529 ******
2026-01-03 00:41:47.037134 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-01-03 00:41:47.037145 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-01-03 00:41:47.037157 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-01-03 00:41:47.037168 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-01-03 00:41:47.037179 | orchestrator |
2026-01-03 00:41:47.037190 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:41:47.037201 | orchestrator | Saturday 03 January 2026 00:41:40 +0000 (0:00:00.960) 0:00:08.489 ******
2026-01-03 00:41:47.037212 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:47.037223 | orchestrator |
2026-01-03 00:41:47.037234 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:41:47.037245 | orchestrator | Saturday 03 January 2026 00:41:41 +0000 (0:00:00.203) 0:00:08.692 ******
2026-01-03 00:41:47.037256 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:47.037267 | orchestrator |
2026-01-03 00:41:47.037278 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:41:47.037289 | orchestrator | Saturday 03 January 2026 00:41:41 +0000 (0:00:00.201) 0:00:08.894 ******
2026-01-03 00:41:47.037300 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:47.037337 | orchestrator |
2026-01-03 00:41:47.037349 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:41:47.037360 | orchestrator | Saturday 03 January 2026 00:41:41 +0000 (0:00:00.214) 0:00:09.109 ******
2026-01-03 00:41:47.037371 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:47.037381 | orchestrator |
2026-01-03 00:41:47.037392 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-03 00:41:47.037403 | orchestrator | Saturday 03 January 2026 00:41:41 +0000 (0:00:00.207) 0:00:09.316 ******
2026-01-03 00:41:47.037414 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-01-03 00:41:47.037425 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-01-03 00:41:47.037436 | orchestrator |
2026-01-03 00:41:47.037467 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-03 00:41:47.037479 | orchestrator | Saturday 03 January 2026 00:41:41 +0000 (0:00:00.162) 0:00:09.479 ******
2026-01-03 00:41:47.037490 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:47.037503 | orchestrator |
2026-01-03 00:41:47.037516 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-03 00:41:47.037530 | orchestrator | Saturday 03 January 2026 00:41:42 +0000 (0:00:00.141) 0:00:09.620 ******
2026-01-03 00:41:47.037542 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:47.037555 | orchestrator |
2026-01-03 00:41:47.037568 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-03 00:41:47.037600 | orchestrator | Saturday 03 January 2026 00:41:42 +0000 (0:00:00.121) 0:00:09.742 ******
2026-01-03 00:41:47.037614 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:47.037627 | orchestrator |
2026-01-03 00:41:47.037640 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-03 00:41:47.037653 | orchestrator | Saturday 03 January 2026 00:41:42 +0000 (0:00:00.137) 0:00:09.880 ******
2026-01-03 00:41:47.037666 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:41:47.037678 | orchestrator |
2026-01-03 00:41:47.037691 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-03 00:41:47.037703 | orchestrator | Saturday 03 January 2026 00:41:42 +0000 (0:00:00.144) 0:00:10.024 ******
2026-01-03 00:41:47.037717 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c38584cd-f033-5ed2-9691-83456ad614b7'}})
2026-01-03 00:41:47.037778 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898'}})
2026-01-03 00:41:47.037795 | orchestrator |
2026-01-03 00:41:47.037819 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-03 00:41:47.037845 | orchestrator | Saturday 03 January 2026 00:41:42 +0000 (0:00:00.161) 0:00:10.186 ******
2026-01-03 00:41:47.037863 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c38584cd-f033-5ed2-9691-83456ad614b7'}})
2026-01-03 00:41:47.037891 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898'}})
2026-01-03 00:41:47.037909 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:47.037928 | orchestrator |
2026-01-03 00:41:47.037946 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-03 00:41:47.037964 | orchestrator | Saturday 03 January 2026 00:41:42 +0000 (0:00:00.150) 0:00:10.336 ******
2026-01-03 00:41:47.037977 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c38584cd-f033-5ed2-9691-83456ad614b7'}})
2026-01-03 00:41:47.037988 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898'}})
2026-01-03 00:41:47.037999 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:47.038010 | orchestrator |
2026-01-03 00:41:47.038090 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-03 00:41:47.038101 | orchestrator | Saturday 03 January 2026 00:41:43 +0000 (0:00:00.326) 0:00:10.662 ******
2026-01-03 00:41:47.038112 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c38584cd-f033-5ed2-9691-83456ad614b7'}})
2026-01-03 00:41:47.038144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898'}})
2026-01-03 00:41:47.038155 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:47.038166 | orchestrator |
2026-01-03 00:41:47.038177 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-03 00:41:47.038196 | orchestrator | Saturday 03 January 2026 00:41:43 +0000 (0:00:00.150) 0:00:10.813 ******
2026-01-03 00:41:47.038208 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:41:47.038218 | orchestrator |
2026-01-03 00:41:47.038229 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-03 00:41:47.038240 | orchestrator | Saturday 03 January 2026 00:41:43 +0000 (0:00:00.130) 0:00:10.944 ******
2026-01-03 00:41:47.038251 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:41:47.038262 | orchestrator |
2026-01-03 00:41:47.038272 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-03 00:41:47.038283 | orchestrator | Saturday 03 January 2026 00:41:43 +0000 (0:00:00.142) 0:00:11.087 ******
2026-01-03 00:41:47.038294 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:47.038304 | orchestrator |
2026-01-03 00:41:47.038315 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-03 00:41:47.038326 | orchestrator | Saturday 03 January 2026 00:41:43 +0000 (0:00:00.131) 0:00:11.218 ******
2026-01-03 00:41:47.038348 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:47.038359 | orchestrator |
2026-01-03 00:41:47.038370 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-03 00:41:47.038381 | orchestrator | Saturday 03 January 2026 00:41:43 +0000 (0:00:00.128) 0:00:11.346 ******
2026-01-03 00:41:47.038392 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:47.038403 | orchestrator |
2026-01-03 00:41:47.038414 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-03 00:41:47.038424 | orchestrator | Saturday 03 January 2026 00:41:43 +0000 (0:00:00.135) 0:00:11.482 ******
2026-01-03 00:41:47.038435 | orchestrator | ok: [testbed-node-3] => {
2026-01-03 00:41:47.038446 | orchestrator |     "ceph_osd_devices": {
2026-01-03 00:41:47.038457 | orchestrator |         "sdb": {
2026-01-03 00:41:47.038468 | orchestrator |             "osd_lvm_uuid": "c38584cd-f033-5ed2-9691-83456ad614b7"
2026-01-03 00:41:47.038479 | orchestrator |         },
2026-01-03 00:41:47.038489 | orchestrator |         "sdc": {
2026-01-03 00:41:47.038500 | orchestrator |             "osd_lvm_uuid": "d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898"
2026-01-03 00:41:47.038511 | orchestrator |         }
2026-01-03 00:41:47.038522 | orchestrator |     }
2026-01-03 00:41:47.038532 | orchestrator | }
2026-01-03 00:41:47.038543 | orchestrator |
2026-01-03 00:41:47.038554 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-03 00:41:47.038565 | orchestrator | Saturday 03 January 2026 00:41:44 +0000 (0:00:00.132) 0:00:11.615 ******
2026-01-03 00:41:47.038576 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:47.038587 | orchestrator |
2026-01-03 00:41:47.038597 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-03 00:41:47.038608 | orchestrator | Saturday 03 January 2026 00:41:44 +0000 (0:00:00.116) 0:00:11.732 ******
2026-01-03 00:41:47.038619 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:47.038630 | orchestrator |
2026-01-03 00:41:47.038641 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-03 00:41:47.038652 | orchestrator | Saturday 03 January 2026 00:41:44 +0000 (0:00:00.131) 0:00:11.863 ******
2026-01-03 00:41:47.038662 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:47.038673 | orchestrator |
2026-01-03 00:41:47.038684 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-03 00:41:47.038695 | orchestrator | Saturday 03 January 2026 00:41:44 +0000 (0:00:00.120) 0:00:11.983 ******
2026-01-03 00:41:47.038705 | orchestrator | changed: [testbed-node-3] => {
2026-01-03 00:41:47.038716 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-03 00:41:47.038727 | orchestrator |         "ceph_osd_devices": {
2026-01-03 00:41:47.038787 | orchestrator |             "sdb": {
2026-01-03 00:41:47.038798 | orchestrator |                 "osd_lvm_uuid": "c38584cd-f033-5ed2-9691-83456ad614b7"
2026-01-03 00:41:47.038810 | orchestrator |             },
2026-01-03 00:41:47.038833 | orchestrator |             "sdc": {
2026-01-03 00:41:47.038844 | orchestrator |                 "osd_lvm_uuid": "d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898"
2026-01-03 00:41:47.038855 | orchestrator |             }
2026-01-03 00:41:47.038866 | orchestrator |         },
2026-01-03 00:41:47.038877 | orchestrator |         "lvm_volumes": [
2026-01-03 00:41:47.038888 | orchestrator |             {
2026-01-03 00:41:47.038899 | orchestrator |                 "data": "osd-block-c38584cd-f033-5ed2-9691-83456ad614b7",
2026-01-03 00:41:47.038910 | orchestrator |                 "data_vg": "ceph-c38584cd-f033-5ed2-9691-83456ad614b7"
2026-01-03 00:41:47.038920 | orchestrator |             },
2026-01-03 00:41:47.038931 | orchestrator |             {
2026-01-03 00:41:47.038942 | orchestrator |                 "data": "osd-block-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898",
2026-01-03 00:41:47.038952 | orchestrator |                 "data_vg": "ceph-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898"
2026-01-03 00:41:47.038969 | orchestrator |             }
2026-01-03 00:41:47.038980 | orchestrator |         ]
2026-01-03 00:41:47.038991 | orchestrator |     }
2026-01-03 00:41:47.039010 | orchestrator | }
2026-01-03 00:41:47.039020 | orchestrator |
2026-01-03 00:41:47.039031 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-03 00:41:47.039042 | orchestrator | Saturday 03 January 2026 00:41:44 +0000 (0:00:00.353) 0:00:12.337 ******
2026-01-03 00:41:47.039053 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-03 00:41:47.039064 | orchestrator |
2026-01-03 00:41:47.039075 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-03 00:41:47.039085 | orchestrator |
2026-01-03 00:41:47.039096 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-03 00:41:47.039107 | orchestrator | Saturday 03 January 2026 00:41:46 +0000 (0:00:01.725) 0:00:14.062 ******
2026-01-03 00:41:47.039118 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-03 00:41:47.039129 | orchestrator |
2026-01-03 00:41:47.039139 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-03 00:41:47.039150 | orchestrator | Saturday 03 January 2026 00:41:46 +0000 (0:00:00.265) 0:00:14.328 ******
2026-01-03 00:41:47.039161 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:41:47.039172 | orchestrator |
2026-01-03 00:41:47.039191 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:41:55.105446 | orchestrator | Saturday 03 January 2026 00:41:47 +0000 (0:00:00.235) 0:00:14.564 ******
2026-01-03 00:41:55.105526 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-01-03 00:41:55.105535 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-01-03 00:41:55.105541 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-01-03 00:41:55.105546 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-01-03 00:41:55.105551 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-01-03 00:41:55.105556 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-01-03 00:41:55.105560 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-01-03 00:41:55.105565 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-01-03 00:41:55.105570 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-01-03 00:41:55.105574 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-01-03 00:41:55.105579 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-01-03 00:41:55.105586 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-01-03 00:41:55.105591 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-01-03 00:41:55.105596 | orchestrator |
2026-01-03 00:41:55.105602 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:41:55.105606 | orchestrator | Saturday 03 January 2026 00:41:47 +0000 (0:00:00.377) 0:00:14.941 ******
2026-01-03 00:41:55.105611 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:41:55.105617 | orchestrator |
2026-01-03 00:41:55.105622 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:41:55.105626 | orchestrator | Saturday 03 January 2026 00:41:47 +0000 (0:00:00.206) 0:00:15.148 ******
2026-01-03 00:41:55.105631 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:41:55.105636 | orchestrator |
2026-01-03 00:41:55.105640 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:41:55.105645 | orchestrator | Saturday 03 January 2026 00:41:47 +0000 (0:00:00.198) 0:00:15.347 ******
2026-01-03 00:41:55.105650 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:41:55.105654 | orchestrator |
2026-01-03 00:41:55.105659 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:41:55.105681 | orchestrator | Saturday 03 January 2026 00:41:47 +0000 (0:00:00.178) 0:00:15.525 ******
2026-01-03 00:41:55.105686 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:41:55.105691 | orchestrator |
2026-01-03 00:41:55.105695 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:41:55.105700 | orchestrator | Saturday 03 January 2026 00:41:48 +0000 (0:00:00.199) 0:00:15.725 ******
2026-01-03 00:41:55.105705 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:41:55.105709 | orchestrator |
2026-01-03 00:41:55.105714 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:41:55.105718 | orchestrator | Saturday 03 January 2026 00:41:48 +0000 (0:00:00.572) 0:00:16.298 ******
2026-01-03 00:41:55.105803 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:41:55.105810 | orchestrator |
2026-01-03 00:41:55.105828 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:41:55.105833 | orchestrator | Saturday 03 January 2026 00:41:48 +0000 (0:00:00.193) 0:00:16.491 ******
2026-01-03 00:41:55.105838 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:41:55.105842 | orchestrator |
2026-01-03 00:41:55.105847 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:41:55.105851 | orchestrator | Saturday 03 January 2026 00:41:49 +0000 (0:00:00.196) 0:00:16.688 ******
2026-01-03 00:41:55.105856 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:41:55.105861 | orchestrator |
2026-01-03 00:41:55.105872 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:41:55.105877 | orchestrator | Saturday 03 January 2026 00:41:49 +0000 (0:00:00.188) 0:00:16.876 ******
2026-01-03 00:41:55.105882 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0)
2026-01-03 00:41:55.105887 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0)
2026-01-03 00:41:55.105892 | orchestrator |
2026-01-03 00:41:55.105897 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:41:55.105901 | orchestrator | Saturday 03 January 2026 00:41:49 +0000 (0:00:00.410) 0:00:17.287 ******
2026-01-03 00:41:55.105906 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c0ea832c-91ed-4e4f-b69a-de1dd6828a04)
2026-01-03 00:41:55.105910 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c0ea832c-91ed-4e4f-b69a-de1dd6828a04)
2026-01-03 00:41:55.105915 | orchestrator |
2026-01-03 00:41:55.105919 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:41:55.105924 | orchestrator | Saturday 03 January 2026 00:41:50 +0000 (0:00:00.464) 0:00:17.752 ******
2026-01-03 00:41:55.105928 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_92ee9088-f522-4da5-b9de-cc8e73fea3b4)
2026-01-03 00:41:55.105933 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_92ee9088-f522-4da5-b9de-cc8e73fea3b4)
2026-01-03 00:41:55.105937 | orchestrator |
2026-01-03 00:41:55.105942 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:41:55.105958 | orchestrator | Saturday 03 January 2026 00:41:50 +0000 (0:00:00.404) 0:00:18.156 ******
2026-01-03 00:41:55.105963 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_64f2fd4f-89e8-4ffa-8baf-bdc6a23cfca5)
2026-01-03 00:41:55.105968 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_64f2fd4f-89e8-4ffa-8baf-bdc6a23cfca5)
2026-01-03 00:41:55.105973 | orchestrator |
2026-01-03 00:41:55.105977 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:41:55.105982 | orchestrator | Saturday 03 January 2026 00:41:51 +0000 (0:00:00.406) 0:00:18.562 ******
2026-01-03 00:41:55.105986 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-03 00:41:55.105991 | orchestrator |
2026-01-03 00:41:55.105995 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:41:55.106000 | orchestrator | Saturday 03 January 2026 00:41:51 +0000 (0:00:00.309) 0:00:18.871 ******
2026-01-03 00:41:55.106010 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-01-03 00:41:55.106055 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-01-03 00:41:55.106061 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-01-03 00:41:55.106066 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-01-03 00:41:55.106072 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-01-03 00:41:55.106077 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-01-03 00:41:55.106082 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-01-03 00:41:55.106087 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-01-03 00:41:55.106092 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-01-03 00:41:55.106098 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-01-03 00:41:55.106103 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-01-03 00:41:55.106108 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-01-03 00:41:55.106113 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-01-03 00:41:55.106118 | orchestrator |
2026-01-03 00:41:55.106124 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:41:55.106129 | orchestrator | Saturday 03 January 2026 00:41:51 +0000 (0:00:00.383) 0:00:19.255 ******
2026-01-03 00:41:55.106135 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:41:55.106140 | orchestrator |
2026-01-03 00:41:55.106145 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:41:55.106154 | orchestrator | Saturday 03 January 2026 00:41:52 +0000 (0:00:00.692) 0:00:19.947 ******
2026-01-03 00:41:55.106160 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:41:55.106164 | orchestrator |
2026-01-03 00:41:55.106170 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:41:55.106175 | orchestrator | Saturday 03 January 2026 00:41:52 +0000 (0:00:00.203) 0:00:20.150 ******
2026-01-03 00:41:55.106180 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:41:55.106185 | orchestrator |
2026-01-03 00:41:55.106190 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:41:55.106196 | orchestrator | Saturday 03 January 2026 00:41:52 +0000 (0:00:00.221) 0:00:20.372 ******
2026-01-03 00:41:55.106201 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:41:55.106207 | orchestrator |
2026-01-03 00:41:55.106212 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:41:55.106217 | orchestrator | Saturday 03 January 2026 00:41:53 +0000 (0:00:00.266) 0:00:20.638 ******
2026-01-03 00:41:55.106223 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:41:55.106227 | orchestrator |
2026-01-03 00:41:55.106233 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:41:55.106238 | orchestrator | Saturday 03 January 2026 00:41:53 +0000 (0:00:00.207) 0:00:20.845 ******
2026-01-03 00:41:55.106243 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:41:55.106248 | orchestrator |
2026-01-03 00:41:55.106253 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:41:55.106258 | orchestrator | Saturday 03 January 2026 00:41:53 +0000 (0:00:00.217) 0:00:21.062 ******
2026-01-03 00:41:55.106264 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:41:55.106269 | orchestrator |
2026-01-03 00:41:55.106274 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:41:55.106279 | orchestrator | Saturday 03 January 2026 00:41:53 +0000 (0:00:00.301) 0:00:21.364 ******
2026-01-03 00:41:55.106289 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:41:55.106295 | orchestrator |
2026-01-03 00:41:55.106300 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:41:55.106305 | orchestrator | Saturday 03 January 2026 00:41:54 +0000 (0:00:00.205) 0:00:21.570 ******
2026-01-03 00:41:55.106311 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-01-03 00:41:55.106316 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-01-03 00:41:55.106322 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-01-03 00:41:55.106327 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-01-03 00:41:55.106333 | orchestrator |
2026-01-03 00:41:55.106338 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:41:55.106343 | orchestrator | Saturday 03 January 2026 00:41:54 +0000 (0:00:00.858) 0:00:22.428 ******
2026-01-03 00:41:55.106348 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:00.586659 | orchestrator |
2026-01-03 00:42:00.586823 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:42:00.586860 | orchestrator | Saturday 03 January 2026 00:41:55 +0000 (0:00:00.205) 0:00:22.633 ******
2026-01-03 00:42:00.586884 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:00.586905 | orchestrator |
2026-01-03 00:42:00.586925 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:42:00.586946 | orchestrator | Saturday 03 January 2026 00:41:55 +0000 (0:00:00.203) 0:00:22.837 ******
2026-01-03 00:42:00.586967 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:00.586988 | orchestrator |
2026-01-03 00:42:00.587008 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:42:00.587028 | orchestrator | Saturday 03 January 2026 00:41:55 +0000 (0:00:00.197) 0:00:23.035 ******
2026-01-03 00:42:00.587049 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:00.587070 | orchestrator |
2026-01-03 00:42:00.587090 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-03 00:42:00.587111 | orchestrator | Saturday 03 January 2026 00:41:56 +0000 (0:00:00.698) 0:00:23.733 ******
2026-01-03 00:42:00.587131 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-01-03 00:42:00.587150 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-01-03 00:42:00.587171 | orchestrator |
2026-01-03 00:42:00.587192 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-03 00:42:00.587214 | orchestrator | Saturday 03 January 2026 00:41:56 +0000 (0:00:00.136) 0:00:23.870 ******
2026-01-03 00:42:00.587237 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:00.587259 | orchestrator |
2026-01-03 00:42:00.587281 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-03 00:42:00.587302 | orchestrator | Saturday 03 January 2026 00:41:56 +0000 (0:00:00.112) 0:00:23.983 ******
2026-01-03 00:42:00.587324 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:00.587346 | orchestrator |
2026-01-03 00:42:00.587367 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-03 00:42:00.587389 | orchestrator | Saturday 03 January 2026 00:41:56 +0000 (0:00:00.104) 0:00:24.088 ******
2026-01-03 00:42:00.587410 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:00.587431 | orchestrator |
2026-01-03 00:42:00.587453 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-03 00:42:00.587475 | orchestrator | Saturday 03 January 2026 00:41:56 +0000 (0:00:00.116) 0:00:24.205 ******
2026-01-03 00:42:00.587497 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:42:00.587520 | orchestrator |
2026-01-03 00:42:00.587542 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-03 00:42:00.587565 | orchestrator | Saturday 03 January 2026 00:41:56 +0000 (0:00:00.113) 0:00:24.318 ******
2026-01-03 00:42:00.587586 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '85e74b82-cd6e-500e-9461-b867f1cfbb6a'}})
2026-01-03 00:42:00.587608 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1ae59360-fa3d-59bd-b3b8-51590acdfd6e'}})
2026-01-03 00:42:00.587662 | orchestrator |
2026-01-03 00:42:00.587684 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-03 00:42:00.587705 | orchestrator | Saturday 03 January 2026 00:41:56 +0000 (0:00:00.138) 0:00:24.457 ******
2026-01-03 00:42:00.587752 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '85e74b82-cd6e-500e-9461-b867f1cfbb6a'}})
2026-01-03 00:42:00.587793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1ae59360-fa3d-59bd-b3b8-51590acdfd6e'}})
2026-01-03 00:42:00.587813 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:00.587830 | orchestrator |
2026-01-03 00:42:00.587848 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-03 00:42:00.587866 | orchestrator | Saturday 03 January 2026 00:41:57 +0000 (0:00:00.133) 0:00:24.590 ******
2026-01-03 00:42:00.587883 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '85e74b82-cd6e-500e-9461-b867f1cfbb6a'}})
2026-01-03 00:42:00.587901 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1ae59360-fa3d-59bd-b3b8-51590acdfd6e'}})
2026-01-03 00:42:00.587920 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:00.587939 | orchestrator |
2026-01-03 00:42:00.587957 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-03 00:42:00.587976 | orchestrator | Saturday 03 January 2026 00:41:57 +0000 (0:00:00.138) 0:00:24.729 ******
2026-01-03 00:42:00.587995 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '85e74b82-cd6e-500e-9461-b867f1cfbb6a'}})
2026-01-03 00:42:00.588014 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1ae59360-fa3d-59bd-b3b8-51590acdfd6e'}})
2026-01-03 00:42:00.588032 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:00.588051 | orchestrator |
2026-01-03 00:42:00.588069 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-03 00:42:00.588087 | orchestrator | Saturday 03 January 2026 00:41:57 +0000 (0:00:00.137) 0:00:24.866 ******
2026-01-03 00:42:00.588105 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:42:00.588123 | orchestrator |
2026-01-03 00:42:00.588141 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-03 00:42:00.588160 | orchestrator | Saturday 03 January 2026 00:41:57 +0000 (0:00:00.116) 0:00:24.983 ******
2026-01-03 00:42:00.588179 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:42:00.588198 | orchestrator |
2026-01-03 00:42:00.588216 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-03 00:42:00.588235 | orchestrator | Saturday 03 January 2026 00:41:57 +0000 (0:00:00.102) 0:00:25.085 ******
2026-01-03 00:42:00.588277 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:00.588295 | orchestrator |
2026-01-03 00:42:00.588314 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-03 00:42:00.588333 | orchestrator | Saturday 03 January 2026 00:41:57 +0000 (0:00:00.229) 0:00:25.314 ******
2026-01-03 00:42:00.588351 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:00.588370 | orchestrator |
2026-01-03 00:42:00.588389 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-03 00:42:00.588407 | orchestrator | Saturday 03 January 2026 00:41:57 +0000 (0:00:00.096) 0:00:25.411 ******
2026-01-03 00:42:00.588426 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:00.588445 | orchestrator |
2026-01-03 00:42:00.588463 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-03 00:42:00.588482 | orchestrator | Saturday 03 January 2026 00:41:57 +0000 (0:00:00.093) 0:00:25.504 ******
2026-01-03 00:42:00.588500 | orchestrator | ok: [testbed-node-4] => {
2026-01-03 00:42:00.588519 | orchestrator |     "ceph_osd_devices": {
2026-01-03 00:42:00.588538 | orchestrator |         "sdb": {
2026-01-03 00:42:00.588557 | orchestrator |             "osd_lvm_uuid": "85e74b82-cd6e-500e-9461-b867f1cfbb6a"
2026-01-03 00:42:00.588590 | orchestrator |         },
2026-01-03 00:42:00.588608 | orchestrator |         "sdc": {
2026-01-03 00:42:00.588626 | orchestrator |             "osd_lvm_uuid": "1ae59360-fa3d-59bd-b3b8-51590acdfd6e"
2026-01-03 00:42:00.588645 | orchestrator |         }
2026-01-03 00:42:00.588663 | orchestrator |     }
2026-01-03 00:42:00.588681 | orchestrator | }
2026-01-03 00:42:00.588700 | orchestrator |
2026-01-03 00:42:00.588767 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-03 00:42:00.588789 | orchestrator | Saturday 03 January 2026 00:41:58 +0000 (0:00:00.109) 0:00:25.613 ******
2026-01-03 00:42:00.588807 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:00.588824 | orchestrator |
2026-01-03 00:42:00.588842 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-03 00:42:00.588860 | orchestrator | Saturday 03 January 2026 00:41:58 +0000 (0:00:00.091) 0:00:25.705 ******
2026-01-03 00:42:00.588879 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:00.588897 | orchestrator |
2026-01-03 00:42:00.588915 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-03 00:42:00.588934 | orchestrator | Saturday 03 January 2026 00:41:58 +0000 (0:00:00.092) 0:00:25.798 ******
2026-01-03 00:42:00.588952 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:00.588970 | orchestrator |
2026-01-03 00:42:00.588988 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-03 00:42:00.589007 | orchestrator | Saturday 03 January 2026 00:41:58 +0000 (0:00:00.096) 0:00:25.894 ******
2026-01-03 00:42:00.589025 | orchestrator | changed: [testbed-node-4] => {
2026-01-03 00:42:00.589044 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-03 00:42:00.589062 | orchestrator |         "ceph_osd_devices": {
2026-01-03 00:42:00.589081 | orchestrator |             "sdb": {
2026-01-03 00:42:00.589099 | orchestrator |                 "osd_lvm_uuid": "85e74b82-cd6e-500e-9461-b867f1cfbb6a"
2026-01-03 00:42:00.589118 | orchestrator |             },
2026-01-03 00:42:00.589136 | orchestrator |             "sdc": {
2026-01-03 00:42:00.589155 | orchestrator |                 "osd_lvm_uuid": "1ae59360-fa3d-59bd-b3b8-51590acdfd6e"
2026-01-03 00:42:00.589173 | orchestrator |             }
2026-01-03 00:42:00.589191 | orchestrator |         },
2026-01-03 00:42:00.589210 | orchestrator |         "lvm_volumes": [
2026-01-03 00:42:00.589228 | orchestrator |             {
2026-01-03 00:42:00.589246 | orchestrator |                 "data": "osd-block-85e74b82-cd6e-500e-9461-b867f1cfbb6a",
2026-01-03 00:42:00.589264 | orchestrator |                 "data_vg": "ceph-85e74b82-cd6e-500e-9461-b867f1cfbb6a"
2026-01-03 00:42:00.589283 | orchestrator |             },
2026-01-03 00:42:00.589301 | orchestrator |             {
2026-01-03 00:42:00.589319 | orchestrator |                 "data": "osd-block-1ae59360-fa3d-59bd-b3b8-51590acdfd6e",
2026-01-03 00:42:00.589337 | orchestrator |                 "data_vg": "ceph-1ae59360-fa3d-59bd-b3b8-51590acdfd6e"
2026-01-03 00:42:00.589355 | orchestrator |             }
2026-01-03 00:42:00.589374 | orchestrator |         ]
2026-01-03 00:42:00.589392 | orchestrator |     }
2026-01-03 00:42:00.589410 | orchestrator | }
2026-01-03 00:42:00.589429 | orchestrator |
2026-01-03 00:42:00.589447 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-03 00:42:00.589465 | orchestrator | Saturday 03 January 2026 00:41:58 +0000 (0:00:00.152) 0:00:26.047 ******
2026-01-03 00:42:00.589485 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-03 00:42:00.589504 | orchestrator |
2026-01-03 00:42:00.589523 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-03 00:42:00.589542 | orchestrator |
2026-01-03 00:42:00.589561 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-03 00:42:00.589581 | orchestrator | Saturday 03 January 2026 00:41:59 +0000 (0:00:00.995) 0:00:27.042 ******
2026-01-03 00:42:00.589601 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-03 00:42:00.589619 | orchestrator |
2026-01-03 00:42:00.589637 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-03 00:42:00.589676 | orchestrator | Saturday 03 January 2026 00:42:00 +0000 (0:00:00.202) 0:00:27.602 ******
2026-01-03 00:42:00.589697 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:42:00.589716 | orchestrator |
2026-01-03 00:42:00.589792 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:00.589811 | orchestrator | Saturday 03 January 2026 00:42:00 +0000 (0:00:00.202) 0:00:27.805 ******
2026-01-03 00:42:00.589829 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-01-03 00:42:00.589848 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-01-03 00:42:00.589866 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-01-03 00:42:00.589885 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-01-03 00:42:00.589903 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-01-03 00:42:00.589936 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-01-03 00:42:07.421134 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-01-03 00:42:07.421954 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-01-03 00:42:07.421982 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-01-03 00:42:07.421988 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-01-03 00:42:07.421994 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-01-03 00:42:07.422000 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-01-03 00:42:07.422005 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-01-03 00:42:07.422011 | orchestrator |
2026-01-03 00:42:07.422044 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:07.422051 | orchestrator | Saturday 03 January 2026 00:42:00 +0000 (0:00:00.309) 0:00:28.115 ******
2026-01-03 00:42:07.422056 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:42:07.422063 | orchestrator |
2026-01-03 00:42:07.422069 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:07.422074 | orchestrator | Saturday 03 January
2026 00:42:00 +0000 (0:00:00.146) 0:00:28.261 ****** 2026-01-03 00:42:07.422078 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:07.422082 | orchestrator | 2026-01-03 00:42:07.422087 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:42:07.422092 | orchestrator | Saturday 03 January 2026 00:42:00 +0000 (0:00:00.165) 0:00:28.426 ****** 2026-01-03 00:42:07.422096 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:07.422101 | orchestrator | 2026-01-03 00:42:07.422105 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:42:07.422109 | orchestrator | Saturday 03 January 2026 00:42:01 +0000 (0:00:00.167) 0:00:28.594 ****** 2026-01-03 00:42:07.422114 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:07.422118 | orchestrator | 2026-01-03 00:42:07.422124 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:42:07.422132 | orchestrator | Saturday 03 January 2026 00:42:01 +0000 (0:00:00.165) 0:00:28.759 ****** 2026-01-03 00:42:07.422139 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:07.422146 | orchestrator | 2026-01-03 00:42:07.422154 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:42:07.422162 | orchestrator | Saturday 03 January 2026 00:42:01 +0000 (0:00:00.170) 0:00:28.930 ****** 2026-01-03 00:42:07.422170 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:07.422178 | orchestrator | 2026-01-03 00:42:07.422186 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:42:07.422215 | orchestrator | Saturday 03 January 2026 00:42:01 +0000 (0:00:00.222) 0:00:29.153 ****** 2026-01-03 00:42:07.422220 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:07.422225 | orchestrator | 2026-01-03 00:42:07.422229 | orchestrator | 
TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:42:07.422234 | orchestrator | Saturday 03 January 2026 00:42:01 +0000 (0:00:00.180) 0:00:29.334 ****** 2026-01-03 00:42:07.422239 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:07.422243 | orchestrator | 2026-01-03 00:42:07.422248 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:42:07.422252 | orchestrator | Saturday 03 January 2026 00:42:01 +0000 (0:00:00.177) 0:00:29.511 ****** 2026-01-03 00:42:07.422257 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b) 2026-01-03 00:42:07.422262 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b) 2026-01-03 00:42:07.422267 | orchestrator | 2026-01-03 00:42:07.422271 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:42:07.422275 | orchestrator | Saturday 03 January 2026 00:42:02 +0000 (0:00:00.664) 0:00:30.175 ****** 2026-01-03 00:42:07.422280 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_18deaf14-926e-4cd7-8e92-2fabf4ecc6e0) 2026-01-03 00:42:07.422284 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_18deaf14-926e-4cd7-8e92-2fabf4ecc6e0) 2026-01-03 00:42:07.422288 | orchestrator | 2026-01-03 00:42:07.422292 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:42:07.422296 | orchestrator | Saturday 03 January 2026 00:42:03 +0000 (0:00:00.373) 0:00:30.549 ****** 2026-01-03 00:42:07.422301 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b0c096f4-c40f-4db0-bd86-40b4e9f72c6c) 2026-01-03 00:42:07.422305 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b0c096f4-c40f-4db0-bd86-40b4e9f72c6c) 2026-01-03 00:42:07.422309 | orchestrator | 
2026-01-03 00:42:07.422314 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:42:07.422318 | orchestrator | Saturday 03 January 2026 00:42:03 +0000 (0:00:00.396) 0:00:30.946 ****** 2026-01-03 00:42:07.422322 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_75764784-fbeb-447b-add5-f3485e6783bd) 2026-01-03 00:42:07.422326 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_75764784-fbeb-447b-add5-f3485e6783bd) 2026-01-03 00:42:07.422331 | orchestrator | 2026-01-03 00:42:07.422335 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:42:07.422339 | orchestrator | Saturday 03 January 2026 00:42:03 +0000 (0:00:00.323) 0:00:31.270 ****** 2026-01-03 00:42:07.422343 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-03 00:42:07.422348 | orchestrator | 2026-01-03 00:42:07.422352 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:07.422372 | orchestrator | Saturday 03 January 2026 00:42:04 +0000 (0:00:00.280) 0:00:31.550 ****** 2026-01-03 00:42:07.422377 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-01-03 00:42:07.422381 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-01-03 00:42:07.422385 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-01-03 00:42:07.422390 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-01-03 00:42:07.422394 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-01-03 00:42:07.422410 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-01-03 
00:42:07.422415 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-01-03 00:42:07.422419 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-01-03 00:42:07.422428 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-01-03 00:42:07.422432 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-01-03 00:42:07.422436 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-01-03 00:42:07.422440 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-01-03 00:42:07.422445 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-01-03 00:42:07.422449 | orchestrator | 2026-01-03 00:42:07.422453 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:07.422458 | orchestrator | Saturday 03 January 2026 00:42:04 +0000 (0:00:00.312) 0:00:31.863 ****** 2026-01-03 00:42:07.422462 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:07.422466 | orchestrator | 2026-01-03 00:42:07.422471 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:07.422475 | orchestrator | Saturday 03 January 2026 00:42:04 +0000 (0:00:00.230) 0:00:32.094 ****** 2026-01-03 00:42:07.422479 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:07.422483 | orchestrator | 2026-01-03 00:42:07.422488 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:07.422495 | orchestrator | Saturday 03 January 2026 00:42:04 +0000 (0:00:00.193) 0:00:32.287 ****** 2026-01-03 00:42:07.422499 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:07.422504 | 
orchestrator | 2026-01-03 00:42:07.422508 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:07.422512 | orchestrator | Saturday 03 January 2026 00:42:04 +0000 (0:00:00.195) 0:00:32.483 ****** 2026-01-03 00:42:07.422517 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:07.422521 | orchestrator | 2026-01-03 00:42:07.422525 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:07.422530 | orchestrator | Saturday 03 January 2026 00:42:05 +0000 (0:00:00.170) 0:00:32.653 ****** 2026-01-03 00:42:07.422534 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:07.422538 | orchestrator | 2026-01-03 00:42:07.422542 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:07.422547 | orchestrator | Saturday 03 January 2026 00:42:05 +0000 (0:00:00.178) 0:00:32.832 ****** 2026-01-03 00:42:07.422551 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:07.422555 | orchestrator | 2026-01-03 00:42:07.422560 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:07.422564 | orchestrator | Saturday 03 January 2026 00:42:05 +0000 (0:00:00.484) 0:00:33.316 ****** 2026-01-03 00:42:07.422568 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:07.422572 | orchestrator | 2026-01-03 00:42:07.422577 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:07.422581 | orchestrator | Saturday 03 January 2026 00:42:05 +0000 (0:00:00.207) 0:00:33.524 ****** 2026-01-03 00:42:07.422585 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:07.422590 | orchestrator | 2026-01-03 00:42:07.422594 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:07.422598 | orchestrator | Saturday 03 January 2026 
00:42:06 +0000 (0:00:00.185) 0:00:33.709 ****** 2026-01-03 00:42:07.422603 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-01-03 00:42:07.422607 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-01-03 00:42:07.422612 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-01-03 00:42:07.422616 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-01-03 00:42:07.422621 | orchestrator | 2026-01-03 00:42:07.422625 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:07.422629 | orchestrator | Saturday 03 January 2026 00:42:06 +0000 (0:00:00.566) 0:00:34.276 ****** 2026-01-03 00:42:07.422633 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:07.422641 | orchestrator | 2026-01-03 00:42:07.422645 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:07.422650 | orchestrator | Saturday 03 January 2026 00:42:06 +0000 (0:00:00.164) 0:00:34.441 ****** 2026-01-03 00:42:07.422654 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:07.422658 | orchestrator | 2026-01-03 00:42:07.422663 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:07.422667 | orchestrator | Saturday 03 January 2026 00:42:07 +0000 (0:00:00.180) 0:00:34.622 ****** 2026-01-03 00:42:07.422671 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:07.422675 | orchestrator | 2026-01-03 00:42:07.422680 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:07.422684 | orchestrator | Saturday 03 January 2026 00:42:07 +0000 (0:00:00.163) 0:00:34.785 ****** 2026-01-03 00:42:07.422688 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:07.422693 | orchestrator | 2026-01-03 00:42:07.422700 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-03 00:42:10.990535 | 
orchestrator | Saturday 03 January 2026 00:42:07 +0000 (0:00:00.164) 0:00:34.949 ****** 2026-01-03 00:42:10.990625 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-01-03 00:42:10.990638 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-01-03 00:42:10.990647 | orchestrator | 2026-01-03 00:42:10.990656 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-01-03 00:42:10.990664 | orchestrator | Saturday 03 January 2026 00:42:07 +0000 (0:00:00.138) 0:00:35.087 ****** 2026-01-03 00:42:10.990673 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:10.990681 | orchestrator | 2026-01-03 00:42:10.990689 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-03 00:42:10.990697 | orchestrator | Saturday 03 January 2026 00:42:07 +0000 (0:00:00.106) 0:00:35.194 ****** 2026-01-03 00:42:10.990704 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:10.990761 | orchestrator | 2026-01-03 00:42:10.990770 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-03 00:42:10.990778 | orchestrator | Saturday 03 January 2026 00:42:07 +0000 (0:00:00.105) 0:00:35.299 ****** 2026-01-03 00:42:10.990786 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:10.990793 | orchestrator | 2026-01-03 00:42:10.990801 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-01-03 00:42:10.990809 | orchestrator | Saturday 03 January 2026 00:42:08 +0000 (0:00:00.268) 0:00:35.568 ****** 2026-01-03 00:42:10.990817 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:42:10.990825 | orchestrator | 2026-01-03 00:42:10.990834 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-01-03 00:42:10.990841 | orchestrator | Saturday 03 January 2026 00:42:08 +0000 (0:00:00.140) 0:00:35.708 
****** 2026-01-03 00:42:10.990850 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c0772612-0fc2-543a-b7cc-c9fc1cdd665f'}}) 2026-01-03 00:42:10.990858 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '45670551-be8c-5463-bb13-3841732d7282'}}) 2026-01-03 00:42:10.990869 | orchestrator | 2026-01-03 00:42:10.990876 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-01-03 00:42:10.990884 | orchestrator | Saturday 03 January 2026 00:42:08 +0000 (0:00:00.159) 0:00:35.868 ****** 2026-01-03 00:42:10.990893 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c0772612-0fc2-543a-b7cc-c9fc1cdd665f'}})  2026-01-03 00:42:10.990903 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '45670551-be8c-5463-bb13-3841732d7282'}})  2026-01-03 00:42:10.990911 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:10.990919 | orchestrator | 2026-01-03 00:42:10.990927 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-01-03 00:42:10.990935 | orchestrator | Saturday 03 January 2026 00:42:08 +0000 (0:00:00.151) 0:00:36.019 ****** 2026-01-03 00:42:10.991007 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c0772612-0fc2-543a-b7cc-c9fc1cdd665f'}})  2026-01-03 00:42:10.991019 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '45670551-be8c-5463-bb13-3841732d7282'}})  2026-01-03 00:42:10.991027 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:10.991034 | orchestrator | 2026-01-03 00:42:10.991042 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-03 00:42:10.991050 | orchestrator | Saturday 03 January 2026 00:42:08 +0000 (0:00:00.142) 0:00:36.162 ****** 2026-01-03 
00:42:10.991073 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c0772612-0fc2-543a-b7cc-c9fc1cdd665f'}})  2026-01-03 00:42:10.991083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '45670551-be8c-5463-bb13-3841732d7282'}})  2026-01-03 00:42:10.991091 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:10.991098 | orchestrator | 2026-01-03 00:42:10.991106 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-03 00:42:10.991114 | orchestrator | Saturday 03 January 2026 00:42:08 +0000 (0:00:00.145) 0:00:36.307 ****** 2026-01-03 00:42:10.991123 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:42:10.991130 | orchestrator | 2026-01-03 00:42:10.991138 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-03 00:42:10.991145 | orchestrator | Saturday 03 January 2026 00:42:08 +0000 (0:00:00.156) 0:00:36.464 ****** 2026-01-03 00:42:10.991153 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:42:10.991161 | orchestrator | 2026-01-03 00:42:10.991170 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-03 00:42:10.991178 | orchestrator | Saturday 03 January 2026 00:42:09 +0000 (0:00:00.136) 0:00:36.600 ****** 2026-01-03 00:42:10.991187 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:10.991195 | orchestrator | 2026-01-03 00:42:10.991203 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-03 00:42:10.991211 | orchestrator | Saturday 03 January 2026 00:42:09 +0000 (0:00:00.142) 0:00:36.742 ****** 2026-01-03 00:42:10.991219 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:10.991227 | orchestrator | 2026-01-03 00:42:10.991235 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-03 00:42:10.991243 
| orchestrator | Saturday 03 January 2026 00:42:09 +0000 (0:00:00.108) 0:00:36.851 ****** 2026-01-03 00:42:10.991252 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:10.991260 | orchestrator | 2026-01-03 00:42:10.991268 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-03 00:42:10.991281 | orchestrator | Saturday 03 January 2026 00:42:09 +0000 (0:00:00.114) 0:00:36.965 ****** 2026-01-03 00:42:10.991288 | orchestrator | ok: [testbed-node-5] => { 2026-01-03 00:42:10.991296 | orchestrator |  "ceph_osd_devices": { 2026-01-03 00:42:10.991304 | orchestrator |  "sdb": { 2026-01-03 00:42:10.991330 | orchestrator |  "osd_lvm_uuid": "c0772612-0fc2-543a-b7cc-c9fc1cdd665f" 2026-01-03 00:42:10.991339 | orchestrator |  }, 2026-01-03 00:42:10.991347 | orchestrator |  "sdc": { 2026-01-03 00:42:10.991355 | orchestrator |  "osd_lvm_uuid": "45670551-be8c-5463-bb13-3841732d7282" 2026-01-03 00:42:10.991363 | orchestrator |  } 2026-01-03 00:42:10.991369 | orchestrator |  } 2026-01-03 00:42:10.991374 | orchestrator | } 2026-01-03 00:42:10.991380 | orchestrator | 2026-01-03 00:42:10.991385 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-03 00:42:10.991391 | orchestrator | Saturday 03 January 2026 00:42:09 +0000 (0:00:00.135) 0:00:37.100 ****** 2026-01-03 00:42:10.991396 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:10.991401 | orchestrator | 2026-01-03 00:42:10.991407 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-03 00:42:10.991413 | orchestrator | Saturday 03 January 2026 00:42:09 +0000 (0:00:00.105) 0:00:37.206 ****** 2026-01-03 00:42:10.991428 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:10.991436 | orchestrator | 2026-01-03 00:42:10.991444 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-01-03 00:42:10.991451 | 
orchestrator | Saturday 03 January 2026 00:42:09 +0000 (0:00:00.241) 0:00:37.448 ****** 2026-01-03 00:42:10.991458 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:10.991465 | orchestrator | 2026-01-03 00:42:10.991473 | orchestrator | TASK [Print configuration data] ************************************************ 2026-01-03 00:42:10.991481 | orchestrator | Saturday 03 January 2026 00:42:10 +0000 (0:00:00.118) 0:00:37.566 ****** 2026-01-03 00:42:10.991490 | orchestrator | changed: [testbed-node-5] => { 2026-01-03 00:42:10.991498 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-01-03 00:42:10.991505 | orchestrator |  "ceph_osd_devices": { 2026-01-03 00:42:10.991513 | orchestrator |  "sdb": { 2026-01-03 00:42:10.991521 | orchestrator |  "osd_lvm_uuid": "c0772612-0fc2-543a-b7cc-c9fc1cdd665f" 2026-01-03 00:42:10.991528 | orchestrator |  }, 2026-01-03 00:42:10.991536 | orchestrator |  "sdc": { 2026-01-03 00:42:10.991543 | orchestrator |  "osd_lvm_uuid": "45670551-be8c-5463-bb13-3841732d7282" 2026-01-03 00:42:10.991551 | orchestrator |  } 2026-01-03 00:42:10.991559 | orchestrator |  }, 2026-01-03 00:42:10.991567 | orchestrator |  "lvm_volumes": [ 2026-01-03 00:42:10.991572 | orchestrator |  { 2026-01-03 00:42:10.991576 | orchestrator |  "data": "osd-block-c0772612-0fc2-543a-b7cc-c9fc1cdd665f", 2026-01-03 00:42:10.991581 | orchestrator |  "data_vg": "ceph-c0772612-0fc2-543a-b7cc-c9fc1cdd665f" 2026-01-03 00:42:10.991586 | orchestrator |  }, 2026-01-03 00:42:10.991590 | orchestrator |  { 2026-01-03 00:42:10.991595 | orchestrator |  "data": "osd-block-45670551-be8c-5463-bb13-3841732d7282", 2026-01-03 00:42:10.991599 | orchestrator |  "data_vg": "ceph-45670551-be8c-5463-bb13-3841732d7282" 2026-01-03 00:42:10.991604 | orchestrator |  } 2026-01-03 00:42:10.991611 | orchestrator |  ] 2026-01-03 00:42:10.991616 | orchestrator |  } 2026-01-03 00:42:10.991620 | orchestrator | } 2026-01-03 00:42:10.991625 | orchestrator | 2026-01-03 00:42:10.991629 | 
orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-01-03 00:42:10.991634 | orchestrator | Saturday 03 January 2026 00:42:10 +0000 (0:00:00.145) 0:00:37.712 ****** 2026-01-03 00:42:10.991638 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-01-03 00:42:10.991643 | orchestrator | 2026-01-03 00:42:10.991647 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:42:10.991652 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-03 00:42:10.991658 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-03 00:42:10.991663 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-03 00:42:10.991668 | orchestrator | 2026-01-03 00:42:10.991672 | orchestrator | 2026-01-03 00:42:10.991677 | orchestrator | 2026-01-03 00:42:10.991681 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:42:10.991685 | orchestrator | Saturday 03 January 2026 00:42:10 +0000 (0:00:00.798) 0:00:38.510 ****** 2026-01-03 00:42:10.991690 | orchestrator | =============================================================================== 2026-01-03 00:42:10.991694 | orchestrator | Write configuration file ------------------------------------------------ 3.52s 2026-01-03 00:42:10.991699 | orchestrator | Add known links to the list of available block devices ------------------ 1.15s 2026-01-03 00:42:10.991703 | orchestrator | Add known partitions to the list of available block devices ------------- 1.09s 2026-01-03 00:42:10.991708 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.08s 2026-01-03 00:42:10.991829 | orchestrator | Add known partitions to the list of available block devices ------------- 
0.96s 2026-01-03 00:42:10.991836 | orchestrator | Add known partitions to the list of available block devices ------------- 0.86s 2026-01-03 00:42:10.991841 | orchestrator | Add known links to the list of available block devices ------------------ 0.79s 2026-01-03 00:42:10.991845 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s 2026-01-03 00:42:10.991849 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s 2026-01-03 00:42:10.991854 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s 2026-01-03 00:42:10.991858 | orchestrator | Get initial list of available block devices ----------------------------- 0.66s 2026-01-03 00:42:10.991863 | orchestrator | Print configuration data ------------------------------------------------ 0.65s 2026-01-03 00:42:10.991867 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s 2026-01-03 00:42:10.991880 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.61s 2026-01-03 00:42:11.210190 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s 2026-01-03 00:42:11.210279 | orchestrator | Add known links to the list of available block devices ------------------ 0.57s 2026-01-03 00:42:11.210290 | orchestrator | Add known partitions to the list of available block devices ------------- 0.57s 2026-01-03 00:42:11.210297 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.52s 2026-01-03 00:42:11.210303 | orchestrator | Set DB devices config data ---------------------------------------------- 0.50s 2026-01-03 00:42:11.210310 | orchestrator | Add known partitions to the list of available block devices ------------- 0.48s 2026-01-03 00:42:33.756886 | orchestrator | 2026-01-03 00:42:33 | INFO  | Task c0d264a1-b064-4154-a09d-e2eb2f45f407 (sync inventory) 
is running in background. Output coming soon. 2026-01-03 00:42:58.605215 | orchestrator | 2026-01-03 00:42:35 | INFO  | Starting group_vars file reorganization 2026-01-03 00:42:58.605313 | orchestrator | 2026-01-03 00:42:35 | INFO  | Moved 0 file(s) to their respective directories 2026-01-03 00:42:58.605325 | orchestrator | 2026-01-03 00:42:35 | INFO  | Group_vars file reorganization completed 2026-01-03 00:42:58.605332 | orchestrator | 2026-01-03 00:42:37 | INFO  | Starting variable preparation from inventory 2026-01-03 00:42:58.605338 | orchestrator | 2026-01-03 00:42:40 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-01-03 00:42:58.605345 | orchestrator | 2026-01-03 00:42:40 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-01-03 00:42:58.605369 | orchestrator | 2026-01-03 00:42:40 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-01-03 00:42:58.605377 | orchestrator | 2026-01-03 00:42:40 | INFO  | 3 file(s) written, 6 host(s) processed 2026-01-03 00:42:58.605385 | orchestrator | 2026-01-03 00:42:40 | INFO  | Variable preparation completed 2026-01-03 00:42:58.605391 | orchestrator | 2026-01-03 00:42:41 | INFO  | Starting inventory overwrite handling 2026-01-03 00:42:58.605401 | orchestrator | 2026-01-03 00:42:41 | INFO  | Handling group overwrites in 99-overwrite 2026-01-03 00:42:58.605408 | orchestrator | 2026-01-03 00:42:41 | INFO  | Removing group frr:children from 60-generic 2026-01-03 00:42:58.605439 | orchestrator | 2026-01-03 00:42:41 | INFO  | Removing group netbird:children from 50-infrastructure 2026-01-03 00:42:58.605447 | orchestrator | 2026-01-03 00:42:41 | INFO  | Removing group ceph-rgw from 50-ceph 2026-01-03 00:42:58.605453 | orchestrator | 2026-01-03 00:42:41 | INFO  | Removing group ceph-mds from 50-ceph 2026-01-03 00:42:58.605460 | orchestrator | 2026-01-03 00:42:41 | INFO  | Handling group overwrites in 20-roles 2026-01-03 00:42:58.605490 | orchestrator | 
2026-01-03 00:42:41 | INFO  | Removing group k3s_node from 50-infrastructure 2026-01-03 00:42:58.605498 | orchestrator | 2026-01-03 00:42:41 | INFO  | Removed 5 group(s) in total 2026-01-03 00:42:58.605504 | orchestrator | 2026-01-03 00:42:41 | INFO  | Inventory overwrite handling completed 2026-01-03 00:42:58.605511 | orchestrator | 2026-01-03 00:42:43 | INFO  | Starting merge of inventory files 2026-01-03 00:42:58.605518 | orchestrator | 2026-01-03 00:42:43 | INFO  | Inventory files merged successfully 2026-01-03 00:42:58.605524 | orchestrator | 2026-01-03 00:42:47 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-01-03 00:42:58.605531 | orchestrator | 2026-01-03 00:42:57 | INFO  | Successfully wrote ClusterShell configuration 2026-01-03 00:42:58.605538 | orchestrator | [master 7c8a146] 2026-01-03-00-42 2026-01-03 00:42:58.605546 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2026-01-03 00:43:00.910363 | orchestrator | 2026-01-03 00:43:00 | INFO  | Task a7509463-f367-4e63-9d77-f8a7129be4fc (ceph-create-lvm-devices) was prepared for execution. 2026-01-03 00:43:00.910442 | orchestrator | 2026-01-03 00:43:00 | INFO  | It takes a moment until task a7509463-f367-4e63-9d77-f8a7129be4fc (ceph-create-lvm-devices) has been started and output is visible here. 
2026-01-03 00:43:10.878558 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-03 00:43:10.878655 | orchestrator | 2.16.14
2026-01-03 00:43:10.878702 | orchestrator |
2026-01-03 00:43:10.878712 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-03 00:43:10.878723 | orchestrator |
2026-01-03 00:43:10.878729 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-03 00:43:10.878734 | orchestrator | Saturday 03 January 2026 00:43:04 +0000 (0:00:00.222) 0:00:00.222 ******
2026-01-03 00:43:10.878739 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-03 00:43:10.878744 | orchestrator |
2026-01-03 00:43:10.878749 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-03 00:43:10.878755 | orchestrator | Saturday 03 January 2026 00:43:04 +0000 (0:00:00.217) 0:00:00.440 ******
2026-01-03 00:43:10.878759 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:43:10.878764 | orchestrator |
2026-01-03 00:43:10.878770 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:43:10.878775 | orchestrator | Saturday 03 January 2026 00:43:04 +0000 (0:00:00.202) 0:00:00.642 ******
2026-01-03 00:43:10.878781 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-03 00:43:10.878785 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-03 00:43:10.878790 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-03 00:43:10.878795 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-03 00:43:10.878799 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-03 00:43:10.878804 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-03 00:43:10.878809 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-03 00:43:10.878814 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-03 00:43:10.878819 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-03 00:43:10.878823 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-03 00:43:10.878830 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-03 00:43:10.878837 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-03 00:43:10.878862 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-03 00:43:10.878867 | orchestrator |
2026-01-03 00:43:10.878872 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:43:10.878877 | orchestrator | Saturday 03 January 2026 00:43:05 +0000 (0:00:00.427) 0:00:01.069 ******
2026-01-03 00:43:10.878882 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:10.878887 | orchestrator |
2026-01-03 00:43:10.878891 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:43:10.878896 | orchestrator | Saturday 03 January 2026 00:43:05 +0000 (0:00:00.196) 0:00:01.265 ******
2026-01-03 00:43:10.878901 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:10.878905 | orchestrator |
2026-01-03 00:43:10.878910 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:43:10.878915 | orchestrator | Saturday 03 January 2026 00:43:05 +0000 (0:00:00.167) 0:00:01.433 ******
2026-01-03 00:43:10.878920 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:10.878925 | orchestrator |
2026-01-03 00:43:10.878929 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:43:10.878934 | orchestrator | Saturday 03 January 2026 00:43:05 +0000 (0:00:00.191) 0:00:01.625 ******
2026-01-03 00:43:10.878939 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:10.878943 | orchestrator |
2026-01-03 00:43:10.878948 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:43:10.878953 | orchestrator | Saturday 03 January 2026 00:43:05 +0000 (0:00:00.182) 0:00:01.807 ******
2026-01-03 00:43:10.878958 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:10.878962 | orchestrator |
2026-01-03 00:43:10.878967 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:43:10.878972 | orchestrator | Saturday 03 January 2026 00:43:06 +0000 (0:00:00.181) 0:00:01.989 ******
2026-01-03 00:43:10.878976 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:10.878981 | orchestrator |
2026-01-03 00:43:10.878986 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:43:10.878991 | orchestrator | Saturday 03 January 2026 00:43:06 +0000 (0:00:00.209) 0:00:02.198 ******
2026-01-03 00:43:10.878995 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:10.879000 | orchestrator |
2026-01-03 00:43:10.879005 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:43:10.879009 | orchestrator | Saturday 03 January 2026 00:43:06 +0000 (0:00:00.185) 0:00:02.384 ******
2026-01-03 00:43:10.879014 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:10.879019 | orchestrator |
2026-01-03 00:43:10.879023 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:43:10.879029 | orchestrator | Saturday 03 January 2026 00:43:06 +0000 (0:00:00.194) 0:00:02.578 ******
2026-01-03 00:43:10.879036 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba)
2026-01-03 00:43:10.879045 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba)
2026-01-03 00:43:10.879053 | orchestrator |
2026-01-03 00:43:10.879061 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:43:10.879082 | orchestrator | Saturday 03 January 2026 00:43:07 +0000 (0:00:00.372) 0:00:02.950 ******
2026-01-03 00:43:10.879089 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2050ce1a-3081-4edd-a04d-3576bece8338)
2026-01-03 00:43:10.879097 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2050ce1a-3081-4edd-a04d-3576bece8338)
2026-01-03 00:43:10.879104 | orchestrator |
2026-01-03 00:43:10.879112 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:43:10.879120 | orchestrator | Saturday 03 January 2026 00:43:07 +0000 (0:00:00.503) 0:00:03.454 ******
2026-01-03 00:43:10.879128 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_deb598c2-f543-4f9b-b077-315ce19fa743)
2026-01-03 00:43:10.879147 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_deb598c2-f543-4f9b-b077-315ce19fa743)
2026-01-03 00:43:10.879156 | orchestrator |
2026-01-03 00:43:10.879163 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:43:10.879171 | orchestrator | Saturday 03 January 2026 00:43:08 +0000 (0:00:00.509) 0:00:03.963 ******
2026-01-03 00:43:10.879178 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f493d531-f14a-40ab-852d-4e184520cb25)
2026-01-03 00:43:10.879186 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f493d531-f14a-40ab-852d-4e184520cb25)
2026-01-03 00:43:10.879193 | orchestrator |
2026-01-03 00:43:10.879201 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:43:10.879209 | orchestrator | Saturday 03 January 2026 00:43:08 +0000 (0:00:00.661) 0:00:04.625 ******
2026-01-03 00:43:10.879218 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-03 00:43:10.879227 | orchestrator |
2026-01-03 00:43:10.879235 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:43:10.879243 | orchestrator | Saturday 03 January 2026 00:43:09 +0000 (0:00:00.296) 0:00:04.921 ******
2026-01-03 00:43:10.879251 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-01-03 00:43:10.879259 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-01-03 00:43:10.879267 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-01-03 00:43:10.879289 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-01-03 00:43:10.879298 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-01-03 00:43:10.879305 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-01-03 00:43:10.879313 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-01-03 00:43:10.879321 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-01-03 00:43:10.879328 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-01-03 00:43:10.879336 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-01-03 00:43:10.879345 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-01-03 00:43:10.879356 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-01-03 00:43:10.879364 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-01-03 00:43:10.879372 | orchestrator |
2026-01-03 00:43:10.879380 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:43:10.879389 | orchestrator | Saturday 03 January 2026 00:43:09 +0000 (0:00:00.380) 0:00:05.302 ******
2026-01-03 00:43:10.879397 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:10.879406 | orchestrator |
2026-01-03 00:43:10.879414 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:43:10.879423 | orchestrator | Saturday 03 January 2026 00:43:09 +0000 (0:00:00.235) 0:00:05.537 ******
2026-01-03 00:43:10.879431 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:10.879440 | orchestrator |
2026-01-03 00:43:10.879448 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:43:10.879457 | orchestrator | Saturday 03 January 2026 00:43:09 +0000 (0:00:00.223) 0:00:05.761 ******
2026-01-03 00:43:10.879465 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:10.879473 | orchestrator |
2026-01-03 00:43:10.879481 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:43:10.879489 | orchestrator | Saturday 03 January 2026 00:43:10 +0000 (0:00:00.225) 0:00:05.987 ******
2026-01-03 00:43:10.879502 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:10.879510 | orchestrator |
2026-01-03 00:43:10.879518 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:43:10.879526 | orchestrator | Saturday 03 January 2026 00:43:10 +0000 (0:00:00.207) 0:00:06.194 ******
2026-01-03 00:43:10.879534 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:10.879543 | orchestrator |
2026-01-03 00:43:10.879551 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:43:10.879559 | orchestrator | Saturday 03 January 2026 00:43:10 +0000 (0:00:00.170) 0:00:06.365 ******
2026-01-03 00:43:10.879566 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:10.879574 | orchestrator |
2026-01-03 00:43:10.879582 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:43:10.879589 | orchestrator | Saturday 03 January 2026 00:43:10 +0000 (0:00:00.175) 0:00:06.541 ******
2026-01-03 00:43:10.879596 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:10.879604 | orchestrator |
2026-01-03 00:43:10.879617 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:43:18.105396 | orchestrator | Saturday 03 January 2026 00:43:10 +0000 (0:00:00.164) 0:00:06.705 ******
2026-01-03 00:43:18.105523 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:18.105546 | orchestrator |
2026-01-03 00:43:18.105562 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:43:18.105572 | orchestrator | Saturday 03 January 2026 00:43:11 +0000 (0:00:00.166) 0:00:06.872 ******
2026-01-03 00:43:18.105581 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-01-03 00:43:18.105591 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-01-03 00:43:18.105600 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-01-03 00:43:18.105609 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-01-03 00:43:18.105618 | orchestrator |
2026-01-03 00:43:18.105627 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:43:18.105636 | orchestrator | Saturday 03 January 2026 00:43:11 +0000 (0:00:00.823) 0:00:07.695 ******
2026-01-03 00:43:18.105645 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:18.105653 | orchestrator |
2026-01-03 00:43:18.105716 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:43:18.105726 | orchestrator | Saturday 03 January 2026 00:43:12 +0000 (0:00:00.144) 0:00:07.840 ******
2026-01-03 00:43:18.105738 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:18.105752 | orchestrator |
2026-01-03 00:43:18.105767 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:43:18.105783 | orchestrator | Saturday 03 January 2026 00:43:12 +0000 (0:00:00.174) 0:00:08.014 ******
2026-01-03 00:43:18.105797 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:18.105807 | orchestrator |
2026-01-03 00:43:18.105816 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:43:18.105824 | orchestrator | Saturday 03 January 2026 00:43:12 +0000 (0:00:00.178) 0:00:08.192 ******
2026-01-03 00:43:18.105833 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:18.105842 | orchestrator |
2026-01-03 00:43:18.105851 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-01-03 00:43:18.105859 | orchestrator | Saturday 03 January 2026 00:43:12 +0000 (0:00:00.173) 0:00:08.366 ******
2026-01-03 00:43:18.105868 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:18.105877 | orchestrator |
2026-01-03 00:43:18.105885 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-01-03 00:43:18.105894 | orchestrator | Saturday 03 January 2026 00:43:12 +0000 (0:00:00.106) 0:00:08.472 ******
2026-01-03 00:43:18.105904 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c38584cd-f033-5ed2-9691-83456ad614b7'}})
2026-01-03 00:43:18.105913 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898'}})
2026-01-03 00:43:18.105922 | orchestrator |
2026-01-03 00:43:18.105931 | orchestrator | TASK [Create block VGs] ********************************************************
2026-01-03 00:43:18.105961 | orchestrator | Saturday 03 January 2026 00:43:12 +0000 (0:00:00.163) 0:00:08.636 ******
2026-01-03 00:43:18.105972 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c38584cd-f033-5ed2-9691-83456ad614b7', 'data_vg': 'ceph-c38584cd-f033-5ed2-9691-83456ad614b7'})
2026-01-03 00:43:18.105981 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898', 'data_vg': 'ceph-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898'})
2026-01-03 00:43:18.105990 | orchestrator |
2026-01-03 00:43:18.105999 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-01-03 00:43:18.106008 | orchestrator | Saturday 03 January 2026 00:43:14 +0000 (0:00:02.002) 0:00:10.639 ******
2026-01-03 00:43:18.106076 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c38584cd-f033-5ed2-9691-83456ad614b7', 'data_vg': 'ceph-c38584cd-f033-5ed2-9691-83456ad614b7'})
2026-01-03 00:43:18.106087 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898', 'data_vg': 'ceph-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898'})
2026-01-03 00:43:18.106096 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:18.106105 | orchestrator |
2026-01-03 00:43:18.106113 | orchestrator | TASK [Create block LVs] ********************************************************
2026-01-03 00:43:18.106122 | orchestrator | Saturday 03 January 2026 00:43:14 +0000 (0:00:00.120) 0:00:10.759 ******
2026-01-03 00:43:18.106131 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c38584cd-f033-5ed2-9691-83456ad614b7', 'data_vg': 'ceph-c38584cd-f033-5ed2-9691-83456ad614b7'})
2026-01-03 00:43:18.106140 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898', 'data_vg': 'ceph-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898'})
2026-01-03 00:43:18.106148 | orchestrator |
2026-01-03 00:43:18.106157 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-01-03 00:43:18.106166 | orchestrator | Saturday 03 January 2026 00:43:16 +0000 (0:00:01.390) 0:00:12.149 ******
2026-01-03 00:43:18.106175 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c38584cd-f033-5ed2-9691-83456ad614b7', 'data_vg': 'ceph-c38584cd-f033-5ed2-9691-83456ad614b7'})
2026-01-03 00:43:18.106184 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898', 'data_vg': 'ceph-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898'})
2026-01-03 00:43:18.106192 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:18.106201 | orchestrator |
2026-01-03 00:43:18.106210 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-01-03 00:43:18.106218 | orchestrator | Saturday 03 January 2026 00:43:16 +0000 (0:00:00.144) 0:00:12.294 ******
2026-01-03 00:43:18.106244 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:18.106253 | orchestrator |
2026-01-03 00:43:18.106262 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-01-03 00:43:18.106271 | orchestrator | Saturday 03 January 2026 00:43:16 +0000 (0:00:00.123) 0:00:12.417 ******
2026-01-03 00:43:18.106279 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c38584cd-f033-5ed2-9691-83456ad614b7', 'data_vg': 'ceph-c38584cd-f033-5ed2-9691-83456ad614b7'})
2026-01-03 00:43:18.106288 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898', 'data_vg': 'ceph-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898'})
2026-01-03 00:43:18.106297 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:18.106305 | orchestrator |
2026-01-03 00:43:18.106314 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-01-03 00:43:18.106322 | orchestrator | Saturday 03 January 2026 00:43:16 +0000 (0:00:00.238) 0:00:12.656 ******
2026-01-03 00:43:18.106331 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:18.106340 | orchestrator |
2026-01-03 00:43:18.106348 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-01-03 00:43:18.106357 | orchestrator | Saturday 03 January 2026 00:43:16 +0000 (0:00:00.127) 0:00:12.783 ******
2026-01-03 00:43:18.106373 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c38584cd-f033-5ed2-9691-83456ad614b7', 'data_vg': 'ceph-c38584cd-f033-5ed2-9691-83456ad614b7'})
2026-01-03 00:43:18.106422 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898', 'data_vg': 'ceph-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898'})
2026-01-03 00:43:18.106432 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:18.106441 | orchestrator |
2026-01-03 00:43:18.106450 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-01-03 00:43:18.106458 | orchestrator | Saturday 03 January 2026 00:43:17 +0000 (0:00:00.126) 0:00:12.920 ******
2026-01-03 00:43:18.106467 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:18.106475 | orchestrator |
2026-01-03 00:43:18.106484 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-01-03 00:43:18.106493 | orchestrator | Saturday 03 January 2026 00:43:17 +0000 (0:00:00.126) 0:00:13.047 ******
2026-01-03 00:43:18.106501 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c38584cd-f033-5ed2-9691-83456ad614b7', 'data_vg': 'ceph-c38584cd-f033-5ed2-9691-83456ad614b7'})
2026-01-03 00:43:18.106510 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898', 'data_vg': 'ceph-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898'})
2026-01-03 00:43:18.106518 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:18.106527 | orchestrator |
2026-01-03 00:43:18.106583 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-01-03 00:43:18.106599 | orchestrator | Saturday 03 January 2026 00:43:17 +0000 (0:00:00.134) 0:00:13.182 ******
2026-01-03 00:43:18.106614 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:43:18.106629 | orchestrator |
2026-01-03 00:43:18.106643 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-03 00:43:18.106741 | orchestrator | Saturday 03 January 2026 00:43:17 +0000 (0:00:00.121) 0:00:13.303 ******
2026-01-03 00:43:18.106758 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c38584cd-f033-5ed2-9691-83456ad614b7', 'data_vg': 'ceph-c38584cd-f033-5ed2-9691-83456ad614b7'})
2026-01-03 00:43:18.106768 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898', 'data_vg': 'ceph-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898'})
2026-01-03 00:43:18.106776 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:18.106785 | orchestrator |
2026-01-03 00:43:18.106794 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-03 00:43:18.106802 | orchestrator | Saturday 03 January 2026 00:43:17 +0000 (0:00:00.152) 0:00:13.456 ******
2026-01-03 00:43:18.106813 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c38584cd-f033-5ed2-9691-83456ad614b7', 'data_vg': 'ceph-c38584cd-f033-5ed2-9691-83456ad614b7'})
2026-01-03 00:43:18.106828 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898', 'data_vg': 'ceph-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898'})
2026-01-03 00:43:18.106843 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:18.106858 | orchestrator |
2026-01-03 00:43:18.106871 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-03 00:43:18.106887 | orchestrator | Saturday 03 January 2026 00:43:17 +0000 (0:00:00.170) 0:00:13.626 ******
2026-01-03 00:43:18.106901 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c38584cd-f033-5ed2-9691-83456ad614b7', 'data_vg': 'ceph-c38584cd-f033-5ed2-9691-83456ad614b7'})
2026-01-03 00:43:18.106915 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898', 'data_vg': 'ceph-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898'})
2026-01-03 00:43:18.106924 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:18.106932 | orchestrator |
2026-01-03 00:43:18.106941 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-03 00:43:18.106958 | orchestrator | Saturday 03 January 2026 00:43:17 +0000 (0:00:00.155) 0:00:13.782 ******
2026-01-03 00:43:18.106967 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:18.106975 | orchestrator |
2026-01-03 00:43:18.106984 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-03 00:43:18.107002 | orchestrator | Saturday 03 January 2026 00:43:18 +0000 (0:00:00.149) 0:00:13.932 ******
2026-01-03 00:43:24.882518 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:24.882609 | orchestrator |
2026-01-03 00:43:24.882622 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-03 00:43:24.882630 | orchestrator | Saturday 03 January 2026 00:43:18 +0000 (0:00:00.156) 0:00:14.088 ******
2026-01-03 00:43:24.882637 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:24.882644 | orchestrator |
2026-01-03 00:43:24.882649 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-03 00:43:24.882692 | orchestrator | Saturday 03 January 2026 00:43:18 +0000 (0:00:00.121) 0:00:14.210 ******
2026-01-03 00:43:24.882701 | orchestrator | ok: [testbed-node-3] => {
2026-01-03 00:43:24.882708 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-01-03 00:43:24.882714 | orchestrator | }
2026-01-03 00:43:24.882718 | orchestrator |
2026-01-03 00:43:24.882722 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-03 00:43:24.882726 | orchestrator | Saturday 03 January 2026 00:43:18 +0000 (0:00:00.332) 0:00:14.542 ******
2026-01-03 00:43:24.882730 | orchestrator | ok: [testbed-node-3] => {
2026-01-03 00:43:24.882734 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-01-03 00:43:24.882738 | orchestrator | }
2026-01-03 00:43:24.882741 | orchestrator |
2026-01-03 00:43:24.882745 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-03 00:43:24.882749 | orchestrator | Saturday 03 January 2026 00:43:18 +0000 (0:00:00.153) 0:00:14.684 ******
2026-01-03 00:43:24.882754 | orchestrator | ok: [testbed-node-3] => {
2026-01-03 00:43:24.882760 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-01-03 00:43:24.882766 | orchestrator | }
2026-01-03 00:43:24.882773 | orchestrator |
2026-01-03 00:43:24.882779 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-03 00:43:24.882783 | orchestrator | Saturday 03 January 2026 00:43:19 +0000 (0:00:00.153) 0:00:14.837 ******
2026-01-03 00:43:24.882787 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:43:24.882790 | orchestrator |
2026-01-03 00:43:24.882794 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-03 00:43:24.882798 | orchestrator | Saturday 03 January 2026 00:43:19 +0000 (0:00:00.671) 0:00:15.508 ******
2026-01-03 00:43:24.882802 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:43:24.882805 | orchestrator |
2026-01-03 00:43:24.882809 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-03 00:43:24.882813 | orchestrator | Saturday 03 January 2026 00:43:20 +0000 (0:00:00.656) 0:00:16.165 ******
2026-01-03 00:43:24.882816 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:43:24.882820 | orchestrator |
2026-01-03 00:43:24.882824 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-01-03 00:43:24.882828 | orchestrator | Saturday 03 January 2026 00:43:20 +0000 (0:00:00.524) 0:00:16.690 ******
2026-01-03 00:43:24.882831 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:43:24.882835 | orchestrator |
2026-01-03 00:43:24.882839 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-01-03 00:43:24.882843 | orchestrator | Saturday 03 January 2026 00:43:21 +0000 (0:00:00.148) 0:00:16.838 ******
2026-01-03 00:43:24.882846 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:24.882850 | orchestrator |
2026-01-03 00:43:24.882854 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-01-03 00:43:24.882858 | orchestrator | Saturday 03 January 2026 00:43:21 +0000 (0:00:00.117) 0:00:16.955 ******
2026-01-03 00:43:24.882861 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:24.882865 | orchestrator |
2026-01-03 00:43:24.882869 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-01-03 00:43:24.882898 | orchestrator | Saturday 03 January 2026 00:43:21 +0000 (0:00:00.111) 0:00:17.067 ******
2026-01-03 00:43:24.882903 | orchestrator | ok: [testbed-node-3] => {
2026-01-03 00:43:24.882909 | orchestrator |     "vgs_report": {
2026-01-03 00:43:24.882940 | orchestrator |         "vg": []
2026-01-03 00:43:24.882947 | orchestrator |     }
2026-01-03 00:43:24.882954 | orchestrator | }
2026-01-03 00:43:24.882958 | orchestrator |
2026-01-03 00:43:24.882961 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-01-03 00:43:24.882965 | orchestrator | Saturday 03 January 2026 00:43:21 +0000 (0:00:00.144) 0:00:17.212 ******
2026-01-03 00:43:24.882969 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:24.882975 | orchestrator |
2026-01-03 00:43:24.882980 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-01-03 00:43:24.882986 | orchestrator | Saturday 03 January 2026 00:43:21 +0000 (0:00:00.162) 0:00:17.374 ******
2026-01-03 00:43:24.882993 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:24.883000 | orchestrator |
2026-01-03 00:43:24.883004 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-01-03 00:43:24.883008 | orchestrator | Saturday 03 January 2026 00:43:21 +0000 (0:00:00.155) 0:00:17.530 ******
2026-01-03 00:43:24.883012 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:24.883016 | orchestrator |
2026-01-03 00:43:24.883019 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-01-03 00:43:24.883023 | orchestrator | Saturday 03 January 2026 00:43:22 +0000 (0:00:00.346) 0:00:17.876 ******
2026-01-03 00:43:24.883027 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:24.883030 | orchestrator |
2026-01-03 00:43:24.883035 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-01-03 00:43:24.883038 | orchestrator | Saturday 03 January 2026 00:43:22 +0000 (0:00:00.150) 0:00:18.027 ******
2026-01-03 00:43:24.883042 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:24.883046 | orchestrator |
2026-01-03 00:43:24.883049 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-03 00:43:24.883053 | orchestrator | Saturday 03 January 2026 00:43:22 +0000 (0:00:00.173) 0:00:18.201 ******
2026-01-03 00:43:24.883057 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:24.883060 | orchestrator |
2026-01-03 00:43:24.883064 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-03 00:43:24.883068 | orchestrator | Saturday 03 January 2026 00:43:22 +0000 (0:00:00.133) 0:00:18.334 ******
2026-01-03 00:43:24.883127 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:24.883132 | orchestrator |
2026-01-03 00:43:24.883136 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-03 00:43:24.883140 | orchestrator | Saturday 03 January 2026 00:43:22 +0000 (0:00:00.162) 0:00:18.497 ******
2026-01-03 00:43:24.883172 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:24.883179 | orchestrator |
2026-01-03 00:43:24.883185 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-03 00:43:24.883190 | orchestrator | Saturday 03 January 2026 00:43:22 +0000 (0:00:00.148) 0:00:18.646 ******
2026-01-03 00:43:24.883194 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:24.883198 | orchestrator |
2026-01-03 00:43:24.883203 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-03 00:43:24.883207 | orchestrator | Saturday 03 January 2026 00:43:22 +0000 (0:00:00.136) 0:00:18.782 ******
2026-01-03 00:43:24.883212 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:24.883216 | orchestrator |
2026-01-03 00:43:24.883221 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-03 00:43:24.883225 | orchestrator | Saturday 03 January 2026 00:43:23 +0000 (0:00:00.141) 0:00:18.924 ******
2026-01-03 00:43:24.883230 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:24.883234 | orchestrator |
2026-01-03 00:43:24.883239 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-03 00:43:24.883243 | orchestrator | Saturday 03 January 2026 00:43:23 +0000 (0:00:00.145) 0:00:19.069 ******
2026-01-03 00:43:24.883257 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:24.883310 | orchestrator |
2026-01-03 00:43:24.883315 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-03 00:43:24.883320 | orchestrator | Saturday 03 January 2026 00:43:23 +0000 (0:00:00.156) 0:00:19.226 ******
2026-01-03 00:43:24.883324 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:24.883329 | orchestrator |
2026-01-03 00:43:24.883333 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-03 00:43:24.883345 | orchestrator | Saturday 03 January 2026 00:43:23 +0000 (0:00:00.136) 0:00:19.362 ******
2026-01-03 00:43:24.883350 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:24.883354 | orchestrator |
2026-01-03 00:43:24.883358 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-03 00:43:24.883363 | orchestrator | Saturday 03 January 2026 00:43:23 +0000 (0:00:00.137) 0:00:19.500 ******
2026-01-03 00:43:24.883368 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c38584cd-f033-5ed2-9691-83456ad614b7', 'data_vg': 'ceph-c38584cd-f033-5ed2-9691-83456ad614b7'})
2026-01-03 00:43:24.883374 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898', 'data_vg': 'ceph-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898'})
2026-01-03 00:43:24.883379 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:24.883386 | orchestrator |
2026-01-03 00:43:24.883392 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-03 00:43:24.883399 | orchestrator | Saturday 03 January 2026 00:43:24 +0000 (0:00:00.396) 0:00:19.897 ******
2026-01-03 00:43:24.883407 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c38584cd-f033-5ed2-9691-83456ad614b7', 'data_vg': 'ceph-c38584cd-f033-5ed2-9691-83456ad614b7'})
2026-01-03 00:43:24.883415 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898', 'data_vg': 'ceph-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898'})
2026-01-03 00:43:24.883421 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:24.883428 | orchestrator |
2026-01-03 00:43:24.883435 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-03 00:43:24.883442 | orchestrator | Saturday 03 January 2026 00:43:24 +0000 (0:00:00.168) 0:00:20.065 ******
2026-01-03 00:43:24.883447 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c38584cd-f033-5ed2-9691-83456ad614b7', 'data_vg': 'ceph-c38584cd-f033-5ed2-9691-83456ad614b7'})
2026-01-03 00:43:24.883451 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898', 'data_vg': 'ceph-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898'})
2026-01-03 00:43:24.883454 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:24.883459 | orchestrator |
2026-01-03 00:43:24.883465 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-03 00:43:24.883472 | orchestrator | Saturday 03 January 2026 00:43:24 +0000 (0:00:00.157) 0:00:20.223 ******
2026-01-03 00:43:24.883478 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-c38584cd-f033-5ed2-9691-83456ad614b7', 'data_vg': 'ceph-c38584cd-f033-5ed2-9691-83456ad614b7'})  2026-01-03 00:43:24.883485 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898', 'data_vg': 'ceph-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898'})  2026-01-03 00:43:24.883491 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:43:24.883495 | orchestrator | 2026-01-03 00:43:24.883498 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-03 00:43:24.883502 | orchestrator | Saturday 03 January 2026 00:43:24 +0000 (0:00:00.169) 0:00:20.392 ****** 2026-01-03 00:43:24.883506 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c38584cd-f033-5ed2-9691-83456ad614b7', 'data_vg': 'ceph-c38584cd-f033-5ed2-9691-83456ad614b7'})  2026-01-03 00:43:24.883510 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898', 'data_vg': 'ceph-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898'})  2026-01-03 00:43:24.883518 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:43:24.883522 | orchestrator | 2026-01-03 00:43:24.883525 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-03 00:43:24.883589 | orchestrator | Saturday 03 January 2026 00:43:24 +0000 (0:00:00.159) 0:00:20.552 ****** 2026-01-03 00:43:24.883598 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c38584cd-f033-5ed2-9691-83456ad614b7', 'data_vg': 'ceph-c38584cd-f033-5ed2-9691-83456ad614b7'})  2026-01-03 00:43:30.351452 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898', 'data_vg': 'ceph-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898'})  2026-01-03 00:43:30.352321 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:43:30.352348 | orchestrator | 2026-01-03 00:43:30.352358 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-01-03 00:43:30.352367 | orchestrator | Saturday 03 January 2026 00:43:24 +0000 (0:00:00.157) 0:00:20.710 ****** 2026-01-03 00:43:30.352375 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c38584cd-f033-5ed2-9691-83456ad614b7', 'data_vg': 'ceph-c38584cd-f033-5ed2-9691-83456ad614b7'})  2026-01-03 00:43:30.352383 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898', 'data_vg': 'ceph-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898'})  2026-01-03 00:43:30.352390 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:43:30.352398 | orchestrator | 2026-01-03 00:43:30.352405 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-03 00:43:30.352413 | orchestrator | Saturday 03 January 2026 00:43:25 +0000 (0:00:00.151) 0:00:20.862 ****** 2026-01-03 00:43:30.352422 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c38584cd-f033-5ed2-9691-83456ad614b7', 'data_vg': 'ceph-c38584cd-f033-5ed2-9691-83456ad614b7'})  2026-01-03 00:43:30.352429 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898', 'data_vg': 'ceph-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898'})  2026-01-03 00:43:30.352436 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:43:30.352444 | orchestrator | 2026-01-03 00:43:30.352451 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-03 00:43:30.352458 | orchestrator | Saturday 03 January 2026 00:43:25 +0000 (0:00:00.134) 0:00:20.996 ****** 2026-01-03 00:43:30.352465 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:43:30.352473 | orchestrator | 2026-01-03 00:43:30.352481 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-03 00:43:30.352488 | orchestrator | Saturday 03 January 2026 00:43:25 +0000 
(0:00:00.513) 0:00:21.510 ****** 2026-01-03 00:43:30.352496 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:43:30.352503 | orchestrator | 2026-01-03 00:43:30.352510 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-03 00:43:30.352517 | orchestrator | Saturday 03 January 2026 00:43:26 +0000 (0:00:00.535) 0:00:22.046 ****** 2026-01-03 00:43:30.352524 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:43:30.352531 | orchestrator | 2026-01-03 00:43:30.352538 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-03 00:43:30.352546 | orchestrator | Saturday 03 January 2026 00:43:26 +0000 (0:00:00.153) 0:00:22.199 ****** 2026-01-03 00:43:30.352553 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-c38584cd-f033-5ed2-9691-83456ad614b7', 'vg_name': 'ceph-c38584cd-f033-5ed2-9691-83456ad614b7'}) 2026-01-03 00:43:30.352577 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898', 'vg_name': 'ceph-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898'}) 2026-01-03 00:43:30.352585 | orchestrator | 2026-01-03 00:43:30.352592 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-03 00:43:30.352599 | orchestrator | Saturday 03 January 2026 00:43:26 +0000 (0:00:00.197) 0:00:22.396 ****** 2026-01-03 00:43:30.352621 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c38584cd-f033-5ed2-9691-83456ad614b7', 'data_vg': 'ceph-c38584cd-f033-5ed2-9691-83456ad614b7'})  2026-01-03 00:43:30.352629 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898', 'data_vg': 'ceph-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898'})  2026-01-03 00:43:30.352636 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:43:30.352644 | orchestrator | 2026-01-03 00:43:30.352691 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-01-03 00:43:30.352700 | orchestrator | Saturday 03 January 2026 00:43:26 +0000 (0:00:00.422) 0:00:22.818 ******
2026-01-03 00:43:30.352707 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c38584cd-f033-5ed2-9691-83456ad614b7', 'data_vg': 'ceph-c38584cd-f033-5ed2-9691-83456ad614b7'})
2026-01-03 00:43:30.352714 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898', 'data_vg': 'ceph-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898'})
2026-01-03 00:43:30.352721 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:30.352729 | orchestrator |
2026-01-03 00:43:30.352736 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-01-03 00:43:30.352744 | orchestrator | Saturday 03 January 2026 00:43:27 +0000 (0:00:00.200) 0:00:23.019 ******
2026-01-03 00:43:30.352750 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c38584cd-f033-5ed2-9691-83456ad614b7', 'data_vg': 'ceph-c38584cd-f033-5ed2-9691-83456ad614b7'})
2026-01-03 00:43:30.352757 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898', 'data_vg': 'ceph-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898'})
2026-01-03 00:43:30.352763 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:43:30.352770 | orchestrator |
2026-01-03 00:43:30.352777 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-01-03 00:43:30.352784 | orchestrator | Saturday 03 January 2026 00:43:27 +0000 (0:00:00.172) 0:00:23.192 ******
2026-01-03 00:43:30.352808 | orchestrator | ok: [testbed-node-3] => {
2026-01-03 00:43:30.352816 | orchestrator |     "lvm_report": {
2026-01-03 00:43:30.352823 | orchestrator |         "lv": [
2026-01-03 00:43:30.352831 | orchestrator |             {
2026-01-03 00:43:30.352839 | orchestrator |                 "lv_name": "osd-block-c38584cd-f033-5ed2-9691-83456ad614b7",
2026-01-03 00:43:30.352847 | orchestrator |                 "vg_name": "ceph-c38584cd-f033-5ed2-9691-83456ad614b7"
2026-01-03 00:43:30.352854 | orchestrator |             },
2026-01-03 00:43:30.352862 | orchestrator |             {
2026-01-03 00:43:30.352869 | orchestrator |                 "lv_name": "osd-block-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898",
2026-01-03 00:43:30.352876 | orchestrator |                 "vg_name": "ceph-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898"
2026-01-03 00:43:30.352883 | orchestrator |             }
2026-01-03 00:43:30.352891 | orchestrator |         ],
2026-01-03 00:43:30.352898 | orchestrator |         "pv": [
2026-01-03 00:43:30.352905 | orchestrator |             {
2026-01-03 00:43:30.352912 | orchestrator |                 "pv_name": "/dev/sdb",
2026-01-03 00:43:30.352917 | orchestrator |                 "vg_name": "ceph-c38584cd-f033-5ed2-9691-83456ad614b7"
2026-01-03 00:43:30.352921 | orchestrator |             },
2026-01-03 00:43:30.352925 | orchestrator |             {
2026-01-03 00:43:30.352930 | orchestrator |                 "pv_name": "/dev/sdc",
2026-01-03 00:43:30.352934 | orchestrator |                 "vg_name": "ceph-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898"
2026-01-03 00:43:30.352938 | orchestrator |             }
2026-01-03 00:43:30.352943 | orchestrator |         ]
2026-01-03 00:43:30.352947 | orchestrator |     }
2026-01-03 00:43:30.352952 | orchestrator | }
2026-01-03 00:43:30.352956 | orchestrator |
2026-01-03 00:43:30.352961 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-03 00:43:30.352965 | orchestrator |
2026-01-03 00:43:30.352970 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-03 00:43:30.352980 | orchestrator | Saturday 03 January 2026 00:43:27 +0000 (0:00:00.279) 0:00:23.471 ******
2026-01-03 00:43:30.352984 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-03 00:43:30.352988 | orchestrator |
2026-01-03 00:43:30.352993 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-03
00:43:30.352997 | orchestrator | Saturday 03 January 2026 00:43:27 +0000 (0:00:00.252) 0:00:23.724 ****** 2026-01-03 00:43:30.353002 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:43:30.353006 | orchestrator | 2026-01-03 00:43:30.353010 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:43:30.353015 | orchestrator | Saturday 03 January 2026 00:43:28 +0000 (0:00:00.221) 0:00:23.945 ****** 2026-01-03 00:43:30.353019 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-03 00:43:30.353024 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-03 00:43:30.353028 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-03 00:43:30.353033 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-03 00:43:30.353037 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-03 00:43:30.353041 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-03 00:43:30.353050 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-03 00:43:30.353054 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-03 00:43:30.353059 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-03 00:43:30.353063 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-03 00:43:30.353068 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-03 00:43:30.353072 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-03 00:43:30.353076 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-01-03 00:43:30.353080 | orchestrator | 2026-01-03 00:43:30.353085 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:43:30.353089 | orchestrator | Saturday 03 January 2026 00:43:28 +0000 (0:00:00.410) 0:00:24.356 ****** 2026-01-03 00:43:30.353094 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:30.353098 | orchestrator | 2026-01-03 00:43:30.353102 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:43:30.353107 | orchestrator | Saturday 03 January 2026 00:43:28 +0000 (0:00:00.232) 0:00:24.588 ****** 2026-01-03 00:43:30.353111 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:30.353115 | orchestrator | 2026-01-03 00:43:30.353120 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:43:30.353124 | orchestrator | Saturday 03 January 2026 00:43:28 +0000 (0:00:00.227) 0:00:24.816 ****** 2026-01-03 00:43:30.353128 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:30.353133 | orchestrator | 2026-01-03 00:43:30.353137 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:43:30.353141 | orchestrator | Saturday 03 January 2026 00:43:29 +0000 (0:00:00.671) 0:00:25.488 ****** 2026-01-03 00:43:30.353146 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:30.353150 | orchestrator | 2026-01-03 00:43:30.353155 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:43:30.353159 | orchestrator | Saturday 03 January 2026 00:43:29 +0000 (0:00:00.270) 0:00:25.758 ****** 2026-01-03 00:43:30.353163 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:30.353168 | orchestrator | 2026-01-03 00:43:30.353172 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-01-03 00:43:30.353179 | orchestrator | Saturday 03 January 2026 00:43:30 +0000 (0:00:00.209) 0:00:25.967 ****** 2026-01-03 00:43:30.353184 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:30.353188 | orchestrator | 2026-01-03 00:43:30.353196 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:43:41.380856 | orchestrator | Saturday 03 January 2026 00:43:30 +0000 (0:00:00.210) 0:00:26.178 ****** 2026-01-03 00:43:41.381743 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:41.381783 | orchestrator | 2026-01-03 00:43:41.381797 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:43:41.381810 | orchestrator | Saturday 03 January 2026 00:43:30 +0000 (0:00:00.200) 0:00:26.378 ****** 2026-01-03 00:43:41.381821 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:41.381832 | orchestrator | 2026-01-03 00:43:41.381844 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:43:41.381856 | orchestrator | Saturday 03 January 2026 00:43:30 +0000 (0:00:00.234) 0:00:26.613 ****** 2026-01-03 00:43:41.381867 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0) 2026-01-03 00:43:41.381880 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0) 2026-01-03 00:43:41.381891 | orchestrator | 2026-01-03 00:43:41.381902 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:43:41.381913 | orchestrator | Saturday 03 January 2026 00:43:31 +0000 (0:00:00.495) 0:00:27.108 ****** 2026-01-03 00:43:41.381923 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c0ea832c-91ed-4e4f-b69a-de1dd6828a04) 2026-01-03 00:43:41.381935 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c0ea832c-91ed-4e4f-b69a-de1dd6828a04) 2026-01-03 00:43:41.381946 | orchestrator | 2026-01-03 00:43:41.381957 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:43:41.381968 | orchestrator | Saturday 03 January 2026 00:43:31 +0000 (0:00:00.531) 0:00:27.640 ****** 2026-01-03 00:43:41.381979 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_92ee9088-f522-4da5-b9de-cc8e73fea3b4) 2026-01-03 00:43:41.381990 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_92ee9088-f522-4da5-b9de-cc8e73fea3b4) 2026-01-03 00:43:41.382001 | orchestrator | 2026-01-03 00:43:41.382068 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:43:41.382082 | orchestrator | Saturday 03 January 2026 00:43:32 +0000 (0:00:00.453) 0:00:28.094 ****** 2026-01-03 00:43:41.382093 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_64f2fd4f-89e8-4ffa-8baf-bdc6a23cfca5) 2026-01-03 00:43:41.382104 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_64f2fd4f-89e8-4ffa-8baf-bdc6a23cfca5) 2026-01-03 00:43:41.382115 | orchestrator | 2026-01-03 00:43:41.382126 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:43:41.382137 | orchestrator | Saturday 03 January 2026 00:43:32 +0000 (0:00:00.560) 0:00:28.655 ****** 2026-01-03 00:43:41.382148 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-03 00:43:41.382160 | orchestrator | 2026-01-03 00:43:41.382171 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:43:41.382182 | orchestrator | Saturday 03 January 2026 00:43:33 +0000 (0:00:00.495) 0:00:29.150 ****** 2026-01-03 00:43:41.382193 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-01-03 00:43:41.382205 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-01-03 00:43:41.382216 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-01-03 00:43:41.382227 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-01-03 00:43:41.382238 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-01-03 00:43:41.382292 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-01-03 00:43:41.382304 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-01-03 00:43:41.382315 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-01-03 00:43:41.382326 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-01-03 00:43:41.382336 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-01-03 00:43:41.382347 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-01-03 00:43:41.382358 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-01-03 00:43:41.382369 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-01-03 00:43:41.382379 | orchestrator | 2026-01-03 00:43:41.382390 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:43:41.382401 | orchestrator | Saturday 03 January 2026 00:43:33 +0000 (0:00:00.534) 0:00:29.685 ****** 2026-01-03 00:43:41.382412 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:41.382423 | orchestrator | 2026-01-03 
00:43:41.382434 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:43:41.382445 | orchestrator | Saturday 03 January 2026 00:43:34 +0000 (0:00:00.196) 0:00:29.881 ****** 2026-01-03 00:43:41.382456 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:41.382467 | orchestrator | 2026-01-03 00:43:41.382478 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:43:41.382489 | orchestrator | Saturday 03 January 2026 00:43:34 +0000 (0:00:00.205) 0:00:30.087 ****** 2026-01-03 00:43:41.382500 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:41.382510 | orchestrator | 2026-01-03 00:43:41.382545 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:43:41.382556 | orchestrator | Saturday 03 January 2026 00:43:34 +0000 (0:00:00.193) 0:00:30.281 ****** 2026-01-03 00:43:41.382567 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:41.382578 | orchestrator | 2026-01-03 00:43:41.382589 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:43:41.382599 | orchestrator | Saturday 03 January 2026 00:43:34 +0000 (0:00:00.211) 0:00:30.492 ****** 2026-01-03 00:43:41.382610 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:41.382621 | orchestrator | 2026-01-03 00:43:41.382632 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:43:41.382663 | orchestrator | Saturday 03 January 2026 00:43:34 +0000 (0:00:00.204) 0:00:30.697 ****** 2026-01-03 00:43:41.382675 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:41.382685 | orchestrator | 2026-01-03 00:43:41.382696 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:43:41.382707 | orchestrator | Saturday 03 January 2026 00:43:35 +0000 (0:00:00.202) 
0:00:30.899 ****** 2026-01-03 00:43:41.382718 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:41.382729 | orchestrator | 2026-01-03 00:43:41.382740 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:43:41.382751 | orchestrator | Saturday 03 January 2026 00:43:35 +0000 (0:00:00.203) 0:00:31.103 ****** 2026-01-03 00:43:41.382761 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:41.382772 | orchestrator | 2026-01-03 00:43:41.382783 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:43:41.382794 | orchestrator | Saturday 03 January 2026 00:43:35 +0000 (0:00:00.193) 0:00:31.296 ****** 2026-01-03 00:43:41.382805 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-01-03 00:43:41.382815 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-01-03 00:43:41.382827 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-01-03 00:43:41.382838 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-01-03 00:43:41.382858 | orchestrator | 2026-01-03 00:43:41.382869 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:43:41.382880 | orchestrator | Saturday 03 January 2026 00:43:36 +0000 (0:00:00.828) 0:00:32.125 ****** 2026-01-03 00:43:41.382890 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:41.382901 | orchestrator | 2026-01-03 00:43:41.382912 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:43:41.382923 | orchestrator | Saturday 03 January 2026 00:43:36 +0000 (0:00:00.205) 0:00:32.330 ****** 2026-01-03 00:43:41.382934 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:41.382945 | orchestrator | 2026-01-03 00:43:41.382955 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:43:41.382966 | orchestrator | Saturday 03 
January 2026 00:43:37 +0000 (0:00:00.648) 0:00:32.978 ****** 2026-01-03 00:43:41.382977 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:41.383046 | orchestrator | 2026-01-03 00:43:41.383061 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:43:41.383072 | orchestrator | Saturday 03 January 2026 00:43:37 +0000 (0:00:00.200) 0:00:33.179 ****** 2026-01-03 00:43:41.383083 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:41.383094 | orchestrator | 2026-01-03 00:43:41.383105 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-03 00:43:41.383122 | orchestrator | Saturday 03 January 2026 00:43:37 +0000 (0:00:00.198) 0:00:33.377 ****** 2026-01-03 00:43:41.383133 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:41.383144 | orchestrator | 2026-01-03 00:43:41.383155 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-03 00:43:41.383166 | orchestrator | Saturday 03 January 2026 00:43:37 +0000 (0:00:00.137) 0:00:33.515 ****** 2026-01-03 00:43:41.383176 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '85e74b82-cd6e-500e-9461-b867f1cfbb6a'}}) 2026-01-03 00:43:41.383188 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1ae59360-fa3d-59bd-b3b8-51590acdfd6e'}}) 2026-01-03 00:43:41.383199 | orchestrator | 2026-01-03 00:43:41.383209 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-03 00:43:41.383220 | orchestrator | Saturday 03 January 2026 00:43:37 +0000 (0:00:00.181) 0:00:33.697 ****** 2026-01-03 00:43:41.383232 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-85e74b82-cd6e-500e-9461-b867f1cfbb6a', 'data_vg': 'ceph-85e74b82-cd6e-500e-9461-b867f1cfbb6a'}) 2026-01-03 00:43:41.383245 | orchestrator | changed: [testbed-node-4] => 
(item={'data': 'osd-block-1ae59360-fa3d-59bd-b3b8-51590acdfd6e', 'data_vg': 'ceph-1ae59360-fa3d-59bd-b3b8-51590acdfd6e'}) 2026-01-03 00:43:41.383255 | orchestrator | 2026-01-03 00:43:41.383266 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-03 00:43:41.383277 | orchestrator | Saturday 03 January 2026 00:43:39 +0000 (0:00:01.896) 0:00:35.594 ****** 2026-01-03 00:43:41.383288 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85e74b82-cd6e-500e-9461-b867f1cfbb6a', 'data_vg': 'ceph-85e74b82-cd6e-500e-9461-b867f1cfbb6a'})  2026-01-03 00:43:41.383300 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1ae59360-fa3d-59bd-b3b8-51590acdfd6e', 'data_vg': 'ceph-1ae59360-fa3d-59bd-b3b8-51590acdfd6e'})  2026-01-03 00:43:41.383348 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:41.383360 | orchestrator | 2026-01-03 00:43:41.383371 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-03 00:43:41.383382 | orchestrator | Saturday 03 January 2026 00:43:39 +0000 (0:00:00.201) 0:00:35.795 ****** 2026-01-03 00:43:41.383393 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-85e74b82-cd6e-500e-9461-b867f1cfbb6a', 'data_vg': 'ceph-85e74b82-cd6e-500e-9461-b867f1cfbb6a'}) 2026-01-03 00:43:41.383412 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1ae59360-fa3d-59bd-b3b8-51590acdfd6e', 'data_vg': 'ceph-1ae59360-fa3d-59bd-b3b8-51590acdfd6e'}) 2026-01-03 00:43:46.433484 | orchestrator | 2026-01-03 00:43:46.433583 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-03 00:43:46.433598 | orchestrator | Saturday 03 January 2026 00:43:41 +0000 (0:00:01.411) 0:00:37.207 ****** 2026-01-03 00:43:46.433610 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85e74b82-cd6e-500e-9461-b867f1cfbb6a', 'data_vg': 
'ceph-85e74b82-cd6e-500e-9461-b867f1cfbb6a'})  2026-01-03 00:43:46.433622 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1ae59360-fa3d-59bd-b3b8-51590acdfd6e', 'data_vg': 'ceph-1ae59360-fa3d-59bd-b3b8-51590acdfd6e'})  2026-01-03 00:43:46.433632 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:46.433720 | orchestrator | 2026-01-03 00:43:46.433733 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-03 00:43:46.433743 | orchestrator | Saturday 03 January 2026 00:43:41 +0000 (0:00:00.140) 0:00:37.347 ****** 2026-01-03 00:43:46.433753 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:46.433763 | orchestrator | 2026-01-03 00:43:46.433773 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-03 00:43:46.433783 | orchestrator | Saturday 03 January 2026 00:43:41 +0000 (0:00:00.124) 0:00:37.471 ****** 2026-01-03 00:43:46.433793 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85e74b82-cd6e-500e-9461-b867f1cfbb6a', 'data_vg': 'ceph-85e74b82-cd6e-500e-9461-b867f1cfbb6a'})  2026-01-03 00:43:46.433802 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1ae59360-fa3d-59bd-b3b8-51590acdfd6e', 'data_vg': 'ceph-1ae59360-fa3d-59bd-b3b8-51590acdfd6e'})  2026-01-03 00:43:46.433812 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:46.433822 | orchestrator | 2026-01-03 00:43:46.433832 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-03 00:43:46.433841 | orchestrator | Saturday 03 January 2026 00:43:41 +0000 (0:00:00.140) 0:00:37.612 ****** 2026-01-03 00:43:46.433851 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:46.433860 | orchestrator | 2026-01-03 00:43:46.433870 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-03 00:43:46.433880 | orchestrator | 
Saturday 03 January 2026 00:43:41 +0000 (0:00:00.127) 0:00:37.740 ****** 2026-01-03 00:43:46.433890 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85e74b82-cd6e-500e-9461-b867f1cfbb6a', 'data_vg': 'ceph-85e74b82-cd6e-500e-9461-b867f1cfbb6a'})  2026-01-03 00:43:46.433899 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1ae59360-fa3d-59bd-b3b8-51590acdfd6e', 'data_vg': 'ceph-1ae59360-fa3d-59bd-b3b8-51590acdfd6e'})  2026-01-03 00:43:46.433909 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:46.433919 | orchestrator | 2026-01-03 00:43:46.433928 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-03 00:43:46.433964 | orchestrator | Saturday 03 January 2026 00:43:42 +0000 (0:00:00.272) 0:00:38.013 ****** 2026-01-03 00:43:46.433989 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:46.434008 | orchestrator | 2026-01-03 00:43:46.434102 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-03 00:43:46.434126 | orchestrator | Saturday 03 January 2026 00:43:42 +0000 (0:00:00.135) 0:00:38.148 ****** 2026-01-03 00:43:46.434144 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85e74b82-cd6e-500e-9461-b867f1cfbb6a', 'data_vg': 'ceph-85e74b82-cd6e-500e-9461-b867f1cfbb6a'})  2026-01-03 00:43:46.434161 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1ae59360-fa3d-59bd-b3b8-51590acdfd6e', 'data_vg': 'ceph-1ae59360-fa3d-59bd-b3b8-51590acdfd6e'})  2026-01-03 00:43:46.434177 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:46.434953 | orchestrator | 2026-01-03 00:43:46.435042 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-03 00:43:46.435061 | orchestrator | Saturday 03 January 2026 00:43:42 +0000 (0:00:00.136) 0:00:38.285 ****** 2026-01-03 00:43:46.435079 | orchestrator | ok: [testbed-node-4] 
2026-01-03 00:43:46.435126 | orchestrator | 2026-01-03 00:43:46.435145 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-03 00:43:46.435161 | orchestrator | Saturday 03 January 2026 00:43:42 +0000 (0:00:00.131) 0:00:38.417 ****** 2026-01-03 00:43:46.435177 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85e74b82-cd6e-500e-9461-b867f1cfbb6a', 'data_vg': 'ceph-85e74b82-cd6e-500e-9461-b867f1cfbb6a'})  2026-01-03 00:43:46.435195 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1ae59360-fa3d-59bd-b3b8-51590acdfd6e', 'data_vg': 'ceph-1ae59360-fa3d-59bd-b3b8-51590acdfd6e'})  2026-01-03 00:43:46.435345 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:46.435362 | orchestrator | 2026-01-03 00:43:46.435411 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-01-03 00:43:46.435428 | orchestrator | Saturday 03 January 2026 00:43:42 +0000 (0:00:00.129) 0:00:38.546 ****** 2026-01-03 00:43:46.435444 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85e74b82-cd6e-500e-9461-b867f1cfbb6a', 'data_vg': 'ceph-85e74b82-cd6e-500e-9461-b867f1cfbb6a'})  2026-01-03 00:43:46.435461 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1ae59360-fa3d-59bd-b3b8-51590acdfd6e', 'data_vg': 'ceph-1ae59360-fa3d-59bd-b3b8-51590acdfd6e'})  2026-01-03 00:43:46.435477 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:46.435494 | orchestrator | 2026-01-03 00:43:46.435511 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-03 00:43:46.435553 | orchestrator | Saturday 03 January 2026 00:43:42 +0000 (0:00:00.154) 0:00:38.701 ****** 2026-01-03 00:43:46.435708 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85e74b82-cd6e-500e-9461-b867f1cfbb6a', 'data_vg': 'ceph-85e74b82-cd6e-500e-9461-b867f1cfbb6a'})  2026-01-03 
00:43:46.435751 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1ae59360-fa3d-59bd-b3b8-51590acdfd6e', 'data_vg': 'ceph-1ae59360-fa3d-59bd-b3b8-51590acdfd6e'})  2026-01-03 00:43:46.435768 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:46.435784 | orchestrator | 2026-01-03 00:43:46.435801 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-03 00:43:46.435818 | orchestrator | Saturday 03 January 2026 00:43:43 +0000 (0:00:00.146) 0:00:38.848 ****** 2026-01-03 00:43:46.435837 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:46.435847 | orchestrator | 2026-01-03 00:43:46.435857 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-03 00:43:46.435867 | orchestrator | Saturday 03 January 2026 00:43:43 +0000 (0:00:00.105) 0:00:38.953 ****** 2026-01-03 00:43:46.435876 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:46.435892 | orchestrator | 2026-01-03 00:43:46.435908 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-01-03 00:43:46.435923 | orchestrator | Saturday 03 January 2026 00:43:43 +0000 (0:00:00.124) 0:00:39.078 ****** 2026-01-03 00:43:46.435938 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:46.435955 | orchestrator | 2026-01-03 00:43:46.435971 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-03 00:43:46.435988 | orchestrator | Saturday 03 January 2026 00:43:43 +0000 (0:00:00.106) 0:00:39.184 ****** 2026-01-03 00:43:46.436003 | orchestrator | ok: [testbed-node-4] => { 2026-01-03 00:43:46.436019 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-03 00:43:46.436036 | orchestrator | } 2026-01-03 00:43:46.436053 | orchestrator | 2026-01-03 00:43:46.436070 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-03 
00:43:46.436085 | orchestrator | Saturday 03 January 2026 00:43:43 +0000 (0:00:00.123) 0:00:39.308 ****** 2026-01-03 00:43:46.436100 | orchestrator | ok: [testbed-node-4] => { 2026-01-03 00:43:46.436116 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-03 00:43:46.436133 | orchestrator | } 2026-01-03 00:43:46.436148 | orchestrator | 2026-01-03 00:43:46.436165 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-03 00:43:46.436181 | orchestrator | Saturday 03 January 2026 00:43:43 +0000 (0:00:00.137) 0:00:39.446 ****** 2026-01-03 00:43:46.436213 | orchestrator | ok: [testbed-node-4] => { 2026-01-03 00:43:46.436228 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-03 00:43:46.436245 | orchestrator | } 2026-01-03 00:43:46.436261 | orchestrator | 2026-01-03 00:43:46.436277 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-01-03 00:43:46.436293 | orchestrator | Saturday 03 January 2026 00:43:43 +0000 (0:00:00.248) 0:00:39.694 ****** 2026-01-03 00:43:46.436309 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:43:46.436324 | orchestrator | 2026-01-03 00:43:46.436340 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-03 00:43:46.436357 | orchestrator | Saturday 03 January 2026 00:43:44 +0000 (0:00:00.568) 0:00:40.262 ****** 2026-01-03 00:43:46.436372 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:43:46.436389 | orchestrator | 2026-01-03 00:43:46.436405 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-03 00:43:46.436421 | orchestrator | Saturday 03 January 2026 00:43:44 +0000 (0:00:00.503) 0:00:40.766 ****** 2026-01-03 00:43:46.436437 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:43:46.436454 | orchestrator | 2026-01-03 00:43:46.436471 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-01-03 00:43:46.436488 | orchestrator | Saturday 03 January 2026 00:43:45 +0000 (0:00:00.535) 0:00:41.301 ****** 2026-01-03 00:43:46.436524 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:43:46.436542 | orchestrator | 2026-01-03 00:43:46.436557 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-03 00:43:46.436573 | orchestrator | Saturday 03 January 2026 00:43:45 +0000 (0:00:00.141) 0:00:41.442 ****** 2026-01-03 00:43:46.436589 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:46.436606 | orchestrator | 2026-01-03 00:43:46.436692 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-03 00:43:46.436714 | orchestrator | Saturday 03 January 2026 00:43:45 +0000 (0:00:00.115) 0:00:41.558 ****** 2026-01-03 00:43:46.436731 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:46.436748 | orchestrator | 2026-01-03 00:43:46.436797 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-03 00:43:46.436815 | orchestrator | Saturday 03 January 2026 00:43:45 +0000 (0:00:00.099) 0:00:41.658 ****** 2026-01-03 00:43:46.436832 | orchestrator | ok: [testbed-node-4] => { 2026-01-03 00:43:46.436851 | orchestrator |  "vgs_report": { 2026-01-03 00:43:46.436868 | orchestrator |  "vg": [] 2026-01-03 00:43:46.436883 | orchestrator |  } 2026-01-03 00:43:46.436901 | orchestrator | } 2026-01-03 00:43:46.437006 | orchestrator | 2026-01-03 00:43:46.437023 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-03 00:43:46.437040 | orchestrator | Saturday 03 January 2026 00:43:45 +0000 (0:00:00.132) 0:00:41.790 ****** 2026-01-03 00:43:46.437056 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:46.437073 | orchestrator | 2026-01-03 00:43:46.437088 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-01-03 00:43:46.437106 | orchestrator | Saturday 03 January 2026 00:43:46 +0000 (0:00:00.127) 0:00:41.917 ****** 2026-01-03 00:43:46.437123 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:46.437140 | orchestrator | 2026-01-03 00:43:46.437158 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-03 00:43:46.437175 | orchestrator | Saturday 03 January 2026 00:43:46 +0000 (0:00:00.128) 0:00:42.046 ****** 2026-01-03 00:43:46.437192 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:46.437211 | orchestrator | 2026-01-03 00:43:46.437228 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-03 00:43:46.437247 | orchestrator | Saturday 03 January 2026 00:43:46 +0000 (0:00:00.102) 0:00:42.148 ****** 2026-01-03 00:43:46.437264 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:46.437280 | orchestrator | 2026-01-03 00:43:46.437312 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-03 00:43:50.752574 | orchestrator | Saturday 03 January 2026 00:43:46 +0000 (0:00:00.112) 0:00:42.261 ****** 2026-01-03 00:43:50.752732 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:50.752747 | orchestrator | 2026-01-03 00:43:50.752757 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-03 00:43:50.752767 | orchestrator | Saturday 03 January 2026 00:43:46 +0000 (0:00:00.239) 0:00:42.501 ****** 2026-01-03 00:43:50.752775 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:50.752784 | orchestrator | 2026-01-03 00:43:50.752792 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-03 00:43:50.752801 | orchestrator | Saturday 03 January 2026 00:43:46 +0000 (0:00:00.107) 0:00:42.608 ****** 2026-01-03 00:43:50.752809 | orchestrator | skipping: [testbed-node-4] 
2026-01-03 00:43:50.752818 | orchestrator | 2026-01-03 00:43:50.752826 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-03 00:43:50.752834 | orchestrator | Saturday 03 January 2026 00:43:46 +0000 (0:00:00.123) 0:00:42.731 ****** 2026-01-03 00:43:50.752843 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:50.752851 | orchestrator | 2026-01-03 00:43:50.752860 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-03 00:43:50.752868 | orchestrator | Saturday 03 January 2026 00:43:47 +0000 (0:00:00.129) 0:00:42.861 ****** 2026-01-03 00:43:50.752876 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:50.752885 | orchestrator | 2026-01-03 00:43:50.752893 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-01-03 00:43:50.752902 | orchestrator | Saturday 03 January 2026 00:43:47 +0000 (0:00:00.109) 0:00:42.970 ****** 2026-01-03 00:43:50.752910 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:50.752919 | orchestrator | 2026-01-03 00:43:50.752927 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-03 00:43:50.752935 | orchestrator | Saturday 03 January 2026 00:43:47 +0000 (0:00:00.124) 0:00:43.095 ****** 2026-01-03 00:43:50.752944 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:50.752952 | orchestrator | 2026-01-03 00:43:50.752960 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-03 00:43:50.752969 | orchestrator | Saturday 03 January 2026 00:43:47 +0000 (0:00:00.125) 0:00:43.221 ****** 2026-01-03 00:43:50.752977 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:50.752985 | orchestrator | 2026-01-03 00:43:50.752995 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-03 00:43:50.753003 | orchestrator | 
Saturday 03 January 2026 00:43:47 +0000 (0:00:00.123) 0:00:43.344 ****** 2026-01-03 00:43:50.753012 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:50.753021 | orchestrator | 2026-01-03 00:43:50.753030 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-03 00:43:50.753040 | orchestrator | Saturday 03 January 2026 00:43:47 +0000 (0:00:00.126) 0:00:43.470 ****** 2026-01-03 00:43:50.753049 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:50.753058 | orchestrator | 2026-01-03 00:43:50.753068 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-03 00:43:50.753090 | orchestrator | Saturday 03 January 2026 00:43:47 +0000 (0:00:00.126) 0:00:43.596 ****** 2026-01-03 00:43:50.753099 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85e74b82-cd6e-500e-9461-b867f1cfbb6a', 'data_vg': 'ceph-85e74b82-cd6e-500e-9461-b867f1cfbb6a'})  2026-01-03 00:43:50.753110 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1ae59360-fa3d-59bd-b3b8-51590acdfd6e', 'data_vg': 'ceph-1ae59360-fa3d-59bd-b3b8-51590acdfd6e'})  2026-01-03 00:43:50.753118 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:50.753127 | orchestrator | 2026-01-03 00:43:50.753137 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-03 00:43:50.753145 | orchestrator | Saturday 03 January 2026 00:43:47 +0000 (0:00:00.137) 0:00:43.734 ****** 2026-01-03 00:43:50.753154 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85e74b82-cd6e-500e-9461-b867f1cfbb6a', 'data_vg': 'ceph-85e74b82-cd6e-500e-9461-b867f1cfbb6a'})  2026-01-03 00:43:50.753169 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1ae59360-fa3d-59bd-b3b8-51590acdfd6e', 'data_vg': 'ceph-1ae59360-fa3d-59bd-b3b8-51590acdfd6e'})  2026-01-03 00:43:50.753176 | orchestrator | skipping: 
[testbed-node-4] 2026-01-03 00:43:50.753182 | orchestrator | 2026-01-03 00:43:50.753188 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-03 00:43:50.753194 | orchestrator | Saturday 03 January 2026 00:43:48 +0000 (0:00:00.126) 0:00:43.860 ****** 2026-01-03 00:43:50.753202 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85e74b82-cd6e-500e-9461-b867f1cfbb6a', 'data_vg': 'ceph-85e74b82-cd6e-500e-9461-b867f1cfbb6a'})  2026-01-03 00:43:50.753210 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1ae59360-fa3d-59bd-b3b8-51590acdfd6e', 'data_vg': 'ceph-1ae59360-fa3d-59bd-b3b8-51590acdfd6e'})  2026-01-03 00:43:50.753219 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:50.753227 | orchestrator | 2026-01-03 00:43:50.753236 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-01-03 00:43:50.753245 | orchestrator | Saturday 03 January 2026 00:43:48 +0000 (0:00:00.136) 0:00:43.997 ****** 2026-01-03 00:43:50.753254 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85e74b82-cd6e-500e-9461-b867f1cfbb6a', 'data_vg': 'ceph-85e74b82-cd6e-500e-9461-b867f1cfbb6a'})  2026-01-03 00:43:50.753263 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1ae59360-fa3d-59bd-b3b8-51590acdfd6e', 'data_vg': 'ceph-1ae59360-fa3d-59bd-b3b8-51590acdfd6e'})  2026-01-03 00:43:50.753272 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:50.753280 | orchestrator | 2026-01-03 00:43:50.753307 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-03 00:43:50.753314 | orchestrator | Saturday 03 January 2026 00:43:48 +0000 (0:00:00.287) 0:00:44.284 ****** 2026-01-03 00:43:50.753320 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85e74b82-cd6e-500e-9461-b867f1cfbb6a', 'data_vg': 
'ceph-85e74b82-cd6e-500e-9461-b867f1cfbb6a'})  2026-01-03 00:43:50.753325 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1ae59360-fa3d-59bd-b3b8-51590acdfd6e', 'data_vg': 'ceph-1ae59360-fa3d-59bd-b3b8-51590acdfd6e'})  2026-01-03 00:43:50.753330 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:50.753335 | orchestrator | 2026-01-03 00:43:50.753340 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-03 00:43:50.753345 | orchestrator | Saturday 03 January 2026 00:43:48 +0000 (0:00:00.134) 0:00:44.418 ****** 2026-01-03 00:43:50.753351 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85e74b82-cd6e-500e-9461-b867f1cfbb6a', 'data_vg': 'ceph-85e74b82-cd6e-500e-9461-b867f1cfbb6a'})  2026-01-03 00:43:50.753356 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1ae59360-fa3d-59bd-b3b8-51590acdfd6e', 'data_vg': 'ceph-1ae59360-fa3d-59bd-b3b8-51590acdfd6e'})  2026-01-03 00:43:50.753361 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:50.753366 | orchestrator | 2026-01-03 00:43:50.753371 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-01-03 00:43:50.753376 | orchestrator | Saturday 03 January 2026 00:43:48 +0000 (0:00:00.124) 0:00:44.543 ****** 2026-01-03 00:43:50.753381 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85e74b82-cd6e-500e-9461-b867f1cfbb6a', 'data_vg': 'ceph-85e74b82-cd6e-500e-9461-b867f1cfbb6a'})  2026-01-03 00:43:50.753387 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1ae59360-fa3d-59bd-b3b8-51590acdfd6e', 'data_vg': 'ceph-1ae59360-fa3d-59bd-b3b8-51590acdfd6e'})  2026-01-03 00:43:50.753392 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:50.753397 | orchestrator | 2026-01-03 00:43:50.753402 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-03 
00:43:50.753407 | orchestrator | Saturday 03 January 2026 00:43:48 +0000 (0:00:00.137) 0:00:44.680 ****** 2026-01-03 00:43:50.753416 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85e74b82-cd6e-500e-9461-b867f1cfbb6a', 'data_vg': 'ceph-85e74b82-cd6e-500e-9461-b867f1cfbb6a'})  2026-01-03 00:43:50.753426 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1ae59360-fa3d-59bd-b3b8-51590acdfd6e', 'data_vg': 'ceph-1ae59360-fa3d-59bd-b3b8-51590acdfd6e'})  2026-01-03 00:43:50.753431 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:50.753436 | orchestrator | 2026-01-03 00:43:50.753441 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-03 00:43:50.753446 | orchestrator | Saturday 03 January 2026 00:43:48 +0000 (0:00:00.146) 0:00:44.827 ****** 2026-01-03 00:43:50.753451 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:43:50.753456 | orchestrator | 2026-01-03 00:43:50.753461 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-03 00:43:50.753466 | orchestrator | Saturday 03 January 2026 00:43:49 +0000 (0:00:00.547) 0:00:45.374 ****** 2026-01-03 00:43:50.753471 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:43:50.753476 | orchestrator | 2026-01-03 00:43:50.753482 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-03 00:43:50.753487 | orchestrator | Saturday 03 January 2026 00:43:50 +0000 (0:00:00.560) 0:00:45.935 ****** 2026-01-03 00:43:50.753492 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:43:50.753497 | orchestrator | 2026-01-03 00:43:50.753502 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-03 00:43:50.753507 | orchestrator | Saturday 03 January 2026 00:43:50 +0000 (0:00:00.140) 0:00:46.076 ****** 2026-01-03 00:43:50.753512 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-1ae59360-fa3d-59bd-b3b8-51590acdfd6e', 'vg_name': 'ceph-1ae59360-fa3d-59bd-b3b8-51590acdfd6e'}) 2026-01-03 00:43:50.753518 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-85e74b82-cd6e-500e-9461-b867f1cfbb6a', 'vg_name': 'ceph-85e74b82-cd6e-500e-9461-b867f1cfbb6a'}) 2026-01-03 00:43:50.753523 | orchestrator | 2026-01-03 00:43:50.753528 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-03 00:43:50.753533 | orchestrator | Saturday 03 January 2026 00:43:50 +0000 (0:00:00.164) 0:00:46.240 ****** 2026-01-03 00:43:50.753538 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85e74b82-cd6e-500e-9461-b867f1cfbb6a', 'data_vg': 'ceph-85e74b82-cd6e-500e-9461-b867f1cfbb6a'})  2026-01-03 00:43:50.753543 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1ae59360-fa3d-59bd-b3b8-51590acdfd6e', 'data_vg': 'ceph-1ae59360-fa3d-59bd-b3b8-51590acdfd6e'})  2026-01-03 00:43:50.753548 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:50.753553 | orchestrator | 2026-01-03 00:43:50.753558 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-01-03 00:43:50.753563 | orchestrator | Saturday 03 January 2026 00:43:50 +0000 (0:00:00.163) 0:00:46.403 ****** 2026-01-03 00:43:50.753568 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85e74b82-cd6e-500e-9461-b867f1cfbb6a', 'data_vg': 'ceph-85e74b82-cd6e-500e-9461-b867f1cfbb6a'})  2026-01-03 00:43:50.753577 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1ae59360-fa3d-59bd-b3b8-51590acdfd6e', 'data_vg': 'ceph-1ae59360-fa3d-59bd-b3b8-51590acdfd6e'})  2026-01-03 00:43:56.980010 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:56.980184 | orchestrator | 2026-01-03 00:43:56.980211 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-03 00:43:56.980232 | 
orchestrator | Saturday 03 January 2026 00:43:50 +0000 (0:00:00.176) 0:00:46.580 ****** 2026-01-03 00:43:56.980272 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85e74b82-cd6e-500e-9461-b867f1cfbb6a', 'data_vg': 'ceph-85e74b82-cd6e-500e-9461-b867f1cfbb6a'})  2026-01-03 00:43:56.980309 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1ae59360-fa3d-59bd-b3b8-51590acdfd6e', 'data_vg': 'ceph-1ae59360-fa3d-59bd-b3b8-51590acdfd6e'})  2026-01-03 00:43:56.980329 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:43:56.980368 | orchestrator | 2026-01-03 00:43:56.980381 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-03 00:43:56.980393 | orchestrator | Saturday 03 January 2026 00:43:50 +0000 (0:00:00.157) 0:00:46.738 ****** 2026-01-03 00:43:56.980404 | orchestrator | ok: [testbed-node-4] => { 2026-01-03 00:43:56.980415 | orchestrator |  "lvm_report": { 2026-01-03 00:43:56.980428 | orchestrator |  "lv": [ 2026-01-03 00:43:56.980439 | orchestrator |  { 2026-01-03 00:43:56.980450 | orchestrator |  "lv_name": "osd-block-1ae59360-fa3d-59bd-b3b8-51590acdfd6e", 2026-01-03 00:43:56.980462 | orchestrator |  "vg_name": "ceph-1ae59360-fa3d-59bd-b3b8-51590acdfd6e" 2026-01-03 00:43:56.980473 | orchestrator |  }, 2026-01-03 00:43:56.980484 | orchestrator |  { 2026-01-03 00:43:56.980497 | orchestrator |  "lv_name": "osd-block-85e74b82-cd6e-500e-9461-b867f1cfbb6a", 2026-01-03 00:43:56.980510 | orchestrator |  "vg_name": "ceph-85e74b82-cd6e-500e-9461-b867f1cfbb6a" 2026-01-03 00:43:56.980522 | orchestrator |  } 2026-01-03 00:43:56.980535 | orchestrator |  ], 2026-01-03 00:43:56.980548 | orchestrator |  "pv": [ 2026-01-03 00:43:56.980560 | orchestrator |  { 2026-01-03 00:43:56.980573 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-03 00:43:56.980587 | orchestrator |  "vg_name": "ceph-85e74b82-cd6e-500e-9461-b867f1cfbb6a" 2026-01-03 00:43:56.980600 | orchestrator |  }, 2026-01-03 
00:43:56.980612 | orchestrator |  { 2026-01-03 00:43:56.980625 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-03 00:43:56.980708 | orchestrator |  "vg_name": "ceph-1ae59360-fa3d-59bd-b3b8-51590acdfd6e" 2026-01-03 00:43:56.980723 | orchestrator |  } 2026-01-03 00:43:56.980735 | orchestrator |  ] 2026-01-03 00:43:56.980748 | orchestrator |  } 2026-01-03 00:43:56.980762 | orchestrator | } 2026-01-03 00:43:56.980775 | orchestrator | 2026-01-03 00:43:56.980787 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-03 00:43:56.980800 | orchestrator | 2026-01-03 00:43:56.980813 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-03 00:43:56.980827 | orchestrator | Saturday 03 January 2026 00:43:51 +0000 (0:00:00.486) 0:00:47.224 ****** 2026-01-03 00:43:56.980840 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-01-03 00:43:56.980853 | orchestrator | 2026-01-03 00:43:56.980868 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-03 00:43:56.980879 | orchestrator | Saturday 03 January 2026 00:43:51 +0000 (0:00:00.253) 0:00:47.477 ****** 2026-01-03 00:43:56.980890 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:43:56.980901 | orchestrator | 2026-01-03 00:43:56.980912 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:43:56.980922 | orchestrator | Saturday 03 January 2026 00:43:51 +0000 (0:00:00.265) 0:00:47.743 ****** 2026-01-03 00:43:56.980934 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-01-03 00:43:56.980945 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-01-03 00:43:56.980956 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-01-03 00:43:56.980966 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-01-03 00:43:56.980977 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-01-03 00:43:56.980988 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-01-03 00:43:56.980999 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-01-03 00:43:56.981009 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-01-03 00:43:56.981020 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-01-03 00:43:56.981040 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-01-03 00:43:56.981051 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-01-03 00:43:56.981061 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-01-03 00:43:56.981072 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-01-03 00:43:56.981083 | orchestrator | 2026-01-03 00:43:56.981099 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:43:56.981110 | orchestrator | Saturday 03 January 2026 00:43:52 +0000 (0:00:00.428) 0:00:48.171 ****** 2026-01-03 00:43:56.981121 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:43:56.981132 | orchestrator | 2026-01-03 00:43:56.981142 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:43:56.981153 | orchestrator | Saturday 03 January 2026 00:43:52 +0000 (0:00:00.202) 0:00:48.373 ****** 2026-01-03 00:43:56.981164 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:43:56.981175 | orchestrator | 2026-01-03 
00:43:56.981186 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:43:56.981219 | orchestrator | Saturday 03 January 2026  00:43:52 +0000 (0:00:00.207)       0:00:48.581 ******
2026-01-03 00:43:56.981240 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:43:56.981258 | orchestrator |
2026-01-03 00:43:56.981278 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:43:56.981297 | orchestrator | Saturday 03 January 2026  00:43:52 +0000 (0:00:00.225)       0:00:48.806 ******
2026-01-03 00:43:56.981316 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:43:56.981335 | orchestrator |
2026-01-03 00:43:56.981347 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:43:56.981415 | orchestrator | Saturday 03 January 2026  00:43:53 +0000 (0:00:00.248)       0:00:49.055 ******
2026-01-03 00:43:56.981428 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:43:56.981438 | orchestrator |
2026-01-03 00:43:56.981449 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:43:56.981459 | orchestrator | Saturday 03 January 2026  00:43:53 +0000 (0:00:00.215)       0:00:49.270 ******
2026-01-03 00:43:56.981470 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:43:56.981480 | orchestrator |
2026-01-03 00:43:56.981491 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:43:56.981502 | orchestrator | Saturday 03 January 2026  00:43:54 +0000 (0:00:00.604)       0:00:49.874 ******
2026-01-03 00:43:56.981512 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:43:56.981523 | orchestrator |
2026-01-03 00:43:56.981534 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:43:56.981544 | orchestrator | Saturday 03 January 2026  00:43:54 +0000 (0:00:00.225)       0:00:50.100 ******
2026-01-03 00:43:56.981555 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:43:56.981565 | orchestrator |
2026-01-03 00:43:56.981576 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:43:56.981587 | orchestrator | Saturday 03 January 2026  00:43:54 +0000 (0:00:00.212)       0:00:50.312 ******
2026-01-03 00:43:56.981598 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b)
2026-01-03 00:43:56.981610 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b)
2026-01-03 00:43:56.981621 | orchestrator |
2026-01-03 00:43:56.981632 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:43:56.981667 | orchestrator | Saturday 03 January 2026  00:43:54 +0000 (0:00:00.427)       0:00:50.740 ******
2026-01-03 00:43:56.981678 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_18deaf14-926e-4cd7-8e92-2fabf4ecc6e0)
2026-01-03 00:43:56.981689 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_18deaf14-926e-4cd7-8e92-2fabf4ecc6e0)
2026-01-03 00:43:56.981700 | orchestrator |
2026-01-03 00:43:56.981722 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:43:56.981739 | orchestrator | Saturday 03 January 2026  00:43:55 +0000 (0:00:00.428)       0:00:51.168 ******
2026-01-03 00:43:56.981750 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b0c096f4-c40f-4db0-bd86-40b4e9f72c6c)
2026-01-03 00:43:56.981761 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b0c096f4-c40f-4db0-bd86-40b4e9f72c6c)
2026-01-03 00:43:56.981772 | orchestrator |
2026-01-03 00:43:56.981782 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:43:56.981794 | orchestrator | Saturday 03 January 2026  00:43:55 +0000 (0:00:00.452)       0:00:51.621 ******
2026-01-03 00:43:56.981804 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_75764784-fbeb-447b-add5-f3485e6783bd)
2026-01-03 00:43:56.981815 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_75764784-fbeb-447b-add5-f3485e6783bd)
2026-01-03 00:43:56.981826 | orchestrator |
2026-01-03 00:43:56.981837 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:43:56.981848 | orchestrator | Saturday 03 January 2026  00:43:56 +0000 (0:00:00.443)       0:00:52.064 ******
2026-01-03 00:43:56.981858 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-03 00:43:56.981869 | orchestrator |
2026-01-03 00:43:56.981880 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:43:56.981890 | orchestrator | Saturday 03 January 2026  00:43:56 +0000 (0:00:00.310)       0:00:52.375 ******
2026-01-03 00:43:56.981902 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-01-03 00:43:56.981912 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-01-03 00:43:56.981923 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-01-03 00:43:56.981934 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-01-03 00:43:56.981944 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-01-03 00:43:56.981955 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-01-03 00:43:56.981966 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-01-03 00:43:56.981977 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-01-03 00:43:56.981988 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-01-03 00:43:56.981998 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-01-03 00:43:56.982009 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-01-03 00:43:56.982096 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-01-03 00:44:06.003679 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-01-03 00:44:06.003824 | orchestrator |
2026-01-03 00:44:06.003842 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:44:06.003855 | orchestrator | Saturday 03 January 2026  00:43:56 +0000 (0:00:00.426)       0:00:52.801 ******
2026-01-03 00:44:06.003866 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:06.003879 | orchestrator |
2026-01-03 00:44:06.003890 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:44:06.003901 | orchestrator | Saturday 03 January 2026  00:43:57 +0000 (0:00:00.205)       0:00:53.007 ******
2026-01-03 00:44:06.003912 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:06.003923 | orchestrator |
2026-01-03 00:44:06.003935 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:44:06.003946 | orchestrator | Saturday 03 January 2026  00:43:57 +0000 (0:00:00.730)       0:00:53.737 ******
2026-01-03 00:44:06.003984 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:06.003995 | orchestrator |
2026-01-03 00:44:06.004006 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:44:06.004017 | orchestrator | Saturday 03 January 2026  00:43:58 +0000 (0:00:00.226)       0:00:53.964 ******
2026-01-03 00:44:06.004028 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:06.004039 | orchestrator |
2026-01-03 00:44:06.004050 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:44:06.004061 | orchestrator | Saturday 03 January 2026  00:43:58 +0000 (0:00:00.225)       0:00:54.189 ******
2026-01-03 00:44:06.004071 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:06.004082 | orchestrator |
2026-01-03 00:44:06.004093 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:44:06.004104 | orchestrator | Saturday 03 January 2026  00:43:58 +0000 (0:00:00.203)       0:00:54.392 ******
2026-01-03 00:44:06.004114 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:06.004125 | orchestrator |
2026-01-03 00:44:06.004139 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:44:06.004153 | orchestrator | Saturday 03 January 2026  00:43:58 +0000 (0:00:00.222)       0:00:54.615 ******
2026-01-03 00:44:06.004167 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:06.004180 | orchestrator |
2026-01-03 00:44:06.004193 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:44:06.004207 | orchestrator | Saturday 03 January 2026  00:43:58 +0000 (0:00:00.188)       0:00:54.804 ******
2026-01-03 00:44:06.004221 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:06.004234 | orchestrator |
2026-01-03 00:44:06.004247 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:44:06.004261 | orchestrator | Saturday 03 January 2026  00:43:59 +0000 (0:00:00.210)       0:00:55.014 ******
2026-01-03 00:44:06.004293 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-01-03 00:44:06.004308 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-01-03 00:44:06.004322 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-01-03 00:44:06.004335 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-01-03 00:44:06.004349 | orchestrator |
2026-01-03 00:44:06.004363 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:44:06.004376 | orchestrator | Saturday 03 January 2026  00:43:59 +0000 (0:00:00.658)       0:00:55.672 ******
2026-01-03 00:44:06.004390 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:06.004403 | orchestrator |
2026-01-03 00:44:06.004418 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:44:06.004431 | orchestrator | Saturday 03 January 2026  00:44:00 +0000 (0:00:00.204)       0:00:55.877 ******
2026-01-03 00:44:06.004445 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:06.004458 | orchestrator |
2026-01-03 00:44:06.004472 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:44:06.004486 | orchestrator | Saturday 03 January 2026  00:44:00 +0000 (0:00:00.192)       0:00:56.070 ******
2026-01-03 00:44:06.004497 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:06.004508 | orchestrator |
2026-01-03 00:44:06.004519 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:44:06.004529 | orchestrator | Saturday 03 January 2026  00:44:00 +0000 (0:00:00.206)       0:00:56.277 ******
2026-01-03 00:44:06.004540 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:06.004551 | orchestrator |
2026-01-03 00:44:06.004561 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-01-03 00:44:06.004572 | orchestrator | Saturday 03 January 2026  00:44:00 +0000 (0:00:00.209)       0:00:56.487 ******
2026-01-03 00:44:06.004583 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:06.004593 | orchestrator |
2026-01-03 00:44:06.004604 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-01-03 00:44:06.004615 | orchestrator | Saturday 03 January 2026  00:44:00 +0000 (0:00:00.297)       0:00:56.784 ******
2026-01-03 00:44:06.004626 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c0772612-0fc2-543a-b7cc-c9fc1cdd665f'}})
2026-01-03 00:44:06.004744 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '45670551-be8c-5463-bb13-3841732d7282'}})
2026-01-03 00:44:06.004761 | orchestrator |
2026-01-03 00:44:06.004772 | orchestrator | TASK [Create block VGs] ********************************************************
2026-01-03 00:44:06.004783 | orchestrator | Saturday 03 January 2026  00:44:01 +0000 (0:00:00.200)       0:00:56.984 ******
2026-01-03 00:44:06.004796 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c0772612-0fc2-543a-b7cc-c9fc1cdd665f', 'data_vg': 'ceph-c0772612-0fc2-543a-b7cc-c9fc1cdd665f'})
2026-01-03 00:44:06.004808 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-45670551-be8c-5463-bb13-3841732d7282', 'data_vg': 'ceph-45670551-be8c-5463-bb13-3841732d7282'})
2026-01-03 00:44:06.004819 | orchestrator |
2026-01-03 00:44:06.004830 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-01-03 00:44:06.004863 | orchestrator | Saturday 03 January 2026  00:44:03 +0000 (0:00:01.931)       0:00:58.915 ******
2026-01-03 00:44:06.004875 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0772612-0fc2-543a-b7cc-c9fc1cdd665f', 'data_vg': 'ceph-c0772612-0fc2-543a-b7cc-c9fc1cdd665f'})
2026-01-03 00:44:06.004888 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45670551-be8c-5463-bb13-3841732d7282', 'data_vg': 'ceph-45670551-be8c-5463-bb13-3841732d7282'})
2026-01-03 00:44:06.004899 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:06.004910 | orchestrator |
2026-01-03 00:44:06.004922 | orchestrator | TASK [Create block LVs] ********************************************************
2026-01-03 00:44:06.004933 | orchestrator | Saturday 03 January 2026  00:44:03 +0000 (0:00:00.169)       0:00:59.085 ******
2026-01-03 00:44:06.004944 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c0772612-0fc2-543a-b7cc-c9fc1cdd665f', 'data_vg': 'ceph-c0772612-0fc2-543a-b7cc-c9fc1cdd665f'})
2026-01-03 00:44:06.004955 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-45670551-be8c-5463-bb13-3841732d7282', 'data_vg': 'ceph-45670551-be8c-5463-bb13-3841732d7282'})
2026-01-03 00:44:06.004967 | orchestrator |
2026-01-03 00:44:06.004978 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-01-03 00:44:06.004989 | orchestrator | Saturday 03 January 2026  00:44:04 +0000 (0:00:01.326)       0:01:00.412 ******
2026-01-03 00:44:06.005000 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0772612-0fc2-543a-b7cc-c9fc1cdd665f', 'data_vg': 'ceph-c0772612-0fc2-543a-b7cc-c9fc1cdd665f'})
2026-01-03 00:44:06.005011 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45670551-be8c-5463-bb13-3841732d7282', 'data_vg': 'ceph-45670551-be8c-5463-bb13-3841732d7282'})
2026-01-03 00:44:06.005022 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:06.005033 | orchestrator |
2026-01-03 00:44:06.005044 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-01-03 00:44:06.005055 | orchestrator | Saturday 03 January 2026  00:44:04 +0000 (0:00:00.137)       0:01:00.588 ******
2026-01-03 00:44:06.005066 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:06.005077 | orchestrator |
2026-01-03 00:44:06.005088 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-01-03 00:44:06.005099 | orchestrator | Saturday 03 January 2026  00:44:04 +0000 (0:00:00.137)       0:01:00.725 ******
2026-01-03 00:44:06.005117 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0772612-0fc2-543a-b7cc-c9fc1cdd665f', 'data_vg': 'ceph-c0772612-0fc2-543a-b7cc-c9fc1cdd665f'})
2026-01-03 00:44:06.005129 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45670551-be8c-5463-bb13-3841732d7282', 'data_vg': 'ceph-45670551-be8c-5463-bb13-3841732d7282'})
2026-01-03 00:44:06.005140 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:06.005151 | orchestrator |
2026-01-03 00:44:06.005162 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-01-03 00:44:06.005181 | orchestrator | Saturday 03 January 2026  00:44:05 +0000 (0:00:00.140)       0:01:00.866 ******
2026-01-03 00:44:06.005192 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:06.005203 | orchestrator |
2026-01-03 00:44:06.005214 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-01-03 00:44:06.005225 | orchestrator | Saturday 03 January 2026  00:44:05 +0000 (0:00:00.124)       0:01:00.990 ******
2026-01-03 00:44:06.005236 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0772612-0fc2-543a-b7cc-c9fc1cdd665f', 'data_vg': 'ceph-c0772612-0fc2-543a-b7cc-c9fc1cdd665f'})
2026-01-03 00:44:06.005247 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45670551-be8c-5463-bb13-3841732d7282', 'data_vg': 'ceph-45670551-be8c-5463-bb13-3841732d7282'})
2026-01-03 00:44:06.005258 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:06.005269 | orchestrator |
2026-01-03 00:44:06.005280 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-01-03 00:44:06.005291 | orchestrator | Saturday 03 January 2026  00:44:05 +0000 (0:00:00.130)       0:01:01.121 ******
2026-01-03 00:44:06.005302 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:06.005313 | orchestrator |
2026-01-03 00:44:06.005324 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-01-03 00:44:06.005335 | orchestrator | Saturday 03 January 2026  00:44:05 +0000 (0:00:00.126)       0:01:01.247 ******
2026-01-03 00:44:06.005346 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0772612-0fc2-543a-b7cc-c9fc1cdd665f', 'data_vg': 'ceph-c0772612-0fc2-543a-b7cc-c9fc1cdd665f'})
2026-01-03 00:44:06.005357 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45670551-be8c-5463-bb13-3841732d7282', 'data_vg': 'ceph-45670551-be8c-5463-bb13-3841732d7282'})
2026-01-03 00:44:06.005368 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:06.005379 | orchestrator |
2026-01-03 00:44:06.005390 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-01-03 00:44:06.005401 | orchestrator | Saturday 03 January 2026  00:44:05 +0000 (0:00:00.145)       0:01:01.393 ******
2026-01-03 00:44:06.005412 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:44:06.005424 | orchestrator |
2026-01-03 00:44:06.005435 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-03 00:44:06.005446 | orchestrator | Saturday 03 January 2026  00:44:05 +0000 (0:00:00.297)       0:01:01.690 ******
2026-01-03 00:44:06.005464 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0772612-0fc2-543a-b7cc-c9fc1cdd665f', 'data_vg': 'ceph-c0772612-0fc2-543a-b7cc-c9fc1cdd665f'})
2026-01-03 00:44:12.040197 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45670551-be8c-5463-bb13-3841732d7282', 'data_vg': 'ceph-45670551-be8c-5463-bb13-3841732d7282'})
2026-01-03 00:44:12.040328 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:12.040341 | orchestrator |
2026-01-03 00:44:12.040351 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-03 00:44:12.040361 | orchestrator | Saturday 03 January 2026  00:44:05 +0000 (0:00:00.142)       0:01:01.833 ******
2026-01-03 00:44:12.040369 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0772612-0fc2-543a-b7cc-c9fc1cdd665f', 'data_vg': 'ceph-c0772612-0fc2-543a-b7cc-c9fc1cdd665f'})
2026-01-03 00:44:12.040378 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45670551-be8c-5463-bb13-3841732d7282', 'data_vg': 'ceph-45670551-be8c-5463-bb13-3841732d7282'})
2026-01-03 00:44:12.040385 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:12.040392 | orchestrator |
2026-01-03 00:44:12.040400 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-03 00:44:12.040407 | orchestrator | Saturday 03 January 2026  00:44:06 +0000 (0:00:00.153)       0:01:01.986 ******
2026-01-03 00:44:12.040415 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0772612-0fc2-543a-b7cc-c9fc1cdd665f', 'data_vg': 'ceph-c0772612-0fc2-543a-b7cc-c9fc1cdd665f'})
2026-01-03 00:44:12.040422 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45670551-be8c-5463-bb13-3841732d7282', 'data_vg': 'ceph-45670551-be8c-5463-bb13-3841732d7282'})
2026-01-03 00:44:12.040474 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:12.040482 | orchestrator |
2026-01-03 00:44:12.040489 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-03 00:44:12.040496 | orchestrator | Saturday 03 January 2026  00:44:06 +0000 (0:00:00.166)       0:01:02.153 ******
2026-01-03 00:44:12.040503 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:12.040510 | orchestrator |
2026-01-03 00:44:12.040518 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-03 00:44:12.040525 | orchestrator | Saturday 03 January 2026  00:44:06 +0000 (0:00:00.141)       0:01:02.295 ******
2026-01-03 00:44:12.040532 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:12.040539 | orchestrator |
2026-01-03 00:44:12.040546 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-03 00:44:12.040553 | orchestrator | Saturday 03 January 2026  00:44:06 +0000 (0:00:00.130)       0:01:02.425 ******
2026-01-03 00:44:12.040560 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:12.040567 | orchestrator |
2026-01-03 00:44:12.040575 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-03 00:44:12.040582 | orchestrator | Saturday 03 January 2026  00:44:06 +0000 (0:00:00.135)       0:01:02.561 ******
2026-01-03 00:44:12.040589 | orchestrator | ok: [testbed-node-5] => {
2026-01-03 00:44:12.040597 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-01-03 00:44:12.040605 | orchestrator | }
2026-01-03 00:44:12.040612 | orchestrator |
2026-01-03 00:44:12.040620 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-03 00:44:12.040646 | orchestrator | Saturday 03 January 2026  00:44:06 +0000 (0:00:00.143)       0:01:02.705 ******
2026-01-03 00:44:12.040654 | orchestrator | ok: [testbed-node-5] => {
2026-01-03 00:44:12.040661 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-01-03 00:44:12.040669 | orchestrator | }
2026-01-03 00:44:12.040676 | orchestrator |
2026-01-03 00:44:12.040683 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-03 00:44:12.040691 | orchestrator | Saturday 03 January 2026  00:44:07 +0000 (0:00:00.141)       0:01:02.846 ******
2026-01-03 00:44:12.040698 | orchestrator | ok: [testbed-node-5] => {
2026-01-03 00:44:12.040706 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-01-03 00:44:12.040714 | orchestrator | }
2026-01-03 00:44:12.040722 | orchestrator |
2026-01-03 00:44:12.040732 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-03 00:44:12.040741 | orchestrator | Saturday 03 January 2026  00:44:07 +0000 (0:00:00.145)       0:01:02.991 ******
2026-01-03 00:44:12.040749 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:44:12.040757 | orchestrator |
2026-01-03 00:44:12.040766 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-03 00:44:12.040774 | orchestrator | Saturday 03 January 2026  00:44:07 +0000 (0:00:00.600)       0:01:03.592 ******
2026-01-03 00:44:12.040783 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:44:12.040792 | orchestrator |
2026-01-03 00:44:12.040801 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-03 00:44:12.040810 | orchestrator | Saturday 03 January 2026  00:44:08 +0000 (0:00:00.513)       0:01:04.106 ******
2026-01-03 00:44:12.040817 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:44:12.040824 | orchestrator |
2026-01-03 00:44:12.040831 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-01-03 00:44:12.040838 | orchestrator | Saturday 03 January 2026  00:44:08 +0000 (0:00:00.687)       0:01:04.793 ******
2026-01-03 00:44:12.040845 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:44:12.040853 | orchestrator |
2026-01-03 00:44:12.040860 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-01-03 00:44:12.040867 | orchestrator | Saturday 03 January 2026  00:44:09 +0000 (0:00:00.157)       0:01:04.950 ******
2026-01-03 00:44:12.040874 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:12.040881 | orchestrator |
2026-01-03 00:44:12.040889 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-01-03 00:44:12.040902 | orchestrator | Saturday 03 January 2026  00:44:09 +0000 (0:00:00.137)       0:01:05.088 ******
2026-01-03 00:44:12.040910 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:12.040917 | orchestrator |
2026-01-03 00:44:12.040924 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-01-03 00:44:12.040951 | orchestrator | Saturday 03 January 2026  00:44:09 +0000 (0:00:00.123)       0:01:05.211 ******
2026-01-03 00:44:12.040959 | orchestrator | ok: [testbed-node-5] => {
2026-01-03 00:44:12.040966 | orchestrator |     "vgs_report": {
2026-01-03 00:44:12.040974 | orchestrator |         "vg": []
2026-01-03 00:44:12.040997 | orchestrator |     }
2026-01-03 00:44:12.041004 | orchestrator | }
2026-01-03 00:44:12.041011 | orchestrator |
2026-01-03 00:44:12.041019 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-01-03 00:44:12.041026 | orchestrator | Saturday 03 January 2026  00:44:09 +0000 (0:00:00.138)       0:01:05.349 ******
2026-01-03 00:44:12.041033 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:12.041040 | orchestrator |
2026-01-03 00:44:12.041047 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-01-03 00:44:12.041054 | orchestrator | Saturday 03 January 2026  00:44:09 +0000 (0:00:00.141)       0:01:05.490 ******
2026-01-03 00:44:12.041061 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:12.041068 | orchestrator |
2026-01-03 00:44:12.041075 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-01-03 00:44:12.041082 | orchestrator | Saturday 03 January 2026  00:44:09 +0000 (0:00:00.135)       0:01:05.626 ******
2026-01-03 00:44:12.041089 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:12.041096 | orchestrator |
2026-01-03 00:44:12.041104 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-01-03 00:44:12.041111 | orchestrator | Saturday 03 January 2026  00:44:09 +0000 (0:00:00.135)       0:01:05.762 ******
2026-01-03 00:44:12.041118 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:12.041125 | orchestrator |
2026-01-03 00:44:12.041132 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-01-03 00:44:12.041139 | orchestrator | Saturday 03 January 2026  00:44:10 +0000 (0:00:00.132)       0:01:05.894 ******
2026-01-03 00:44:12.041146 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:12.041153 | orchestrator |
2026-01-03 00:44:12.041160 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-03 00:44:12.041167 | orchestrator | Saturday 03 January 2026  00:44:10 +0000 (0:00:00.132)       0:01:06.026 ******
2026-01-03 00:44:12.041174 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:12.041181 | orchestrator |
2026-01-03 00:44:12.041188 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-03 00:44:12.041195 | orchestrator | Saturday 03 January 2026  00:44:10 +0000 (0:00:00.131)       0:01:06.158 ******
2026-01-03 00:44:12.041202 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:12.041209 | orchestrator |
2026-01-03 00:44:12.041217 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-03 00:44:12.041224 | orchestrator | Saturday 03 January 2026  00:44:10 +0000 (0:00:00.130)       0:01:06.288 ******
2026-01-03 00:44:12.041231 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:12.041237 | orchestrator |
2026-01-03 00:44:12.041245 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-03 00:44:12.041252 | orchestrator | Saturday 03 January 2026  00:44:10 +0000 (0:00:00.303)       0:01:06.591 ******
2026-01-03 00:44:12.041259 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:12.041266 | orchestrator |
2026-01-03 00:44:12.041277 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-03 00:44:12.041284 | orchestrator | Saturday 03 January 2026  00:44:10 +0000 (0:00:00.145)       0:01:06.737 ******
2026-01-03 00:44:12.041291 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:12.041298 | orchestrator |
2026-01-03 00:44:12.041306 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-03 00:44:12.041320 | orchestrator | Saturday 03 January 2026  00:44:11 +0000 (0:00:00.138)       0:01:06.875 ******
2026-01-03 00:44:12.041328 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:12.041335 | orchestrator |
2026-01-03 00:44:12.041342 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-03 00:44:12.041349 | orchestrator | Saturday 03 January 2026  00:44:11 +0000 (0:00:00.138)       0:01:07.014 ******
2026-01-03 00:44:12.041356 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:12.041363 | orchestrator |
2026-01-03 00:44:12.041370 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-03 00:44:12.041377 | orchestrator | Saturday 03 January 2026  00:44:11 +0000 (0:00:00.131)       0:01:07.145 ******
2026-01-03 00:44:12.041384 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:12.041392 | orchestrator |
2026-01-03 00:44:12.041399 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-03 00:44:12.041406 | orchestrator | Saturday 03 January 2026  00:44:11 +0000 (0:00:00.128)       0:01:07.274 ******
2026-01-03 00:44:12.041413 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:12.041420 | orchestrator |
2026-01-03 00:44:12.041427 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-03 00:44:12.041434 | orchestrator | Saturday 03 January 2026  00:44:11 +0000 (0:00:00.138)       0:01:07.413 ******
2026-01-03 00:44:12.041441 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0772612-0fc2-543a-b7cc-c9fc1cdd665f', 'data_vg': 'ceph-c0772612-0fc2-543a-b7cc-c9fc1cdd665f'})
2026-01-03 00:44:12.041448 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45670551-be8c-5463-bb13-3841732d7282', 'data_vg': 'ceph-45670551-be8c-5463-bb13-3841732d7282'})
2026-01-03 00:44:12.041455 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:12.041462 | orchestrator |
2026-01-03 00:44:12.041470 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-03 00:44:12.041477 | orchestrator | Saturday 03 January 2026  00:44:11 +0000 (0:00:00.147)       0:01:07.561 ******
2026-01-03 00:44:12.041484 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0772612-0fc2-543a-b7cc-c9fc1cdd665f', 'data_vg': 'ceph-c0772612-0fc2-543a-b7cc-c9fc1cdd665f'})
2026-01-03 00:44:12.041491 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45670551-be8c-5463-bb13-3841732d7282', 'data_vg': 'ceph-45670551-be8c-5463-bb13-3841732d7282'})
2026-01-03 00:44:12.041498 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:12.041505 | orchestrator |
2026-01-03 00:44:12.041512 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-03 00:44:12.041519 | orchestrator | Saturday 03 January 2026  00:44:11 +0000 (0:00:00.158)       0:01:07.719 ******
2026-01-03 00:44:12.041532 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0772612-0fc2-543a-b7cc-c9fc1cdd665f', 'data_vg': 'ceph-c0772612-0fc2-543a-b7cc-c9fc1cdd665f'})
2026-01-03 00:44:15.094368 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45670551-be8c-5463-bb13-3841732d7282', 'data_vg': 'ceph-45670551-be8c-5463-bb13-3841732d7282'})
2026-01-03 00:44:15.095090 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:15.095108 | orchestrator |
2026-01-03 00:44:15.095115 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-03 00:44:15.095121 | orchestrator | Saturday 03 January 2026  00:44:12 +0000 (0:00:00.149)       0:01:07.869 ******
2026-01-03 00:44:15.095128 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0772612-0fc2-543a-b7cc-c9fc1cdd665f', 'data_vg': 'ceph-c0772612-0fc2-543a-b7cc-c9fc1cdd665f'})
2026-01-03 00:44:15.095135 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45670551-be8c-5463-bb13-3841732d7282', 'data_vg': 'ceph-45670551-be8c-5463-bb13-3841732d7282'})
2026-01-03 00:44:15.095142 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:15.095152 | orchestrator |
2026-01-03 00:44:15.095161 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-01-03 00:44:15.095194 | orchestrator | Saturday 03 January 2026  00:44:12 +0000 (0:00:00.147)       0:01:08.017 ******
2026-01-03 00:44:15.095202 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0772612-0fc2-543a-b7cc-c9fc1cdd665f', 'data_vg': 'ceph-c0772612-0fc2-543a-b7cc-c9fc1cdd665f'})
2026-01-03 00:44:15.095209 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45670551-be8c-5463-bb13-3841732d7282', 'data_vg': 'ceph-45670551-be8c-5463-bb13-3841732d7282'})
2026-01-03 00:44:15.095216 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:15.095222 | orchestrator |
2026-01-03 00:44:15.095227 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-01-03 00:44:15.095232 | orchestrator | Saturday 03 January 2026  00:44:12 +0000 (0:00:00.146)       0:01:08.164 ******
2026-01-03 00:44:15.095237 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0772612-0fc2-543a-b7cc-c9fc1cdd665f', 'data_vg': 'ceph-c0772612-0fc2-543a-b7cc-c9fc1cdd665f'})
2026-01-03 00:44:15.095253 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45670551-be8c-5463-bb13-3841732d7282', 'data_vg': 'ceph-45670551-be8c-5463-bb13-3841732d7282'})
2026-01-03 00:44:15.095258 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:15.095263 | orchestrator |
2026-01-03 00:44:15.095267 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-01-03 00:44:15.095270 | orchestrator | Saturday 03 January 2026  00:44:12 +0000 (0:00:00.334)       0:01:08.498 ******
2026-01-03 00:44:15.095274 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0772612-0fc2-543a-b7cc-c9fc1cdd665f', 'data_vg': 'ceph-c0772612-0fc2-543a-b7cc-c9fc1cdd665f'})
2026-01-03 00:44:15.095278 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45670551-be8c-5463-bb13-3841732d7282', 'data_vg': 'ceph-45670551-be8c-5463-bb13-3841732d7282'})
2026-01-03 00:44:15.095283 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:15.095287 | orchestrator |
2026-01-03 00:44:15.095291 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-01-03 00:44:15.095295 | orchestrator | Saturday 03 January 2026  00:44:12 +0000 (0:00:00.158)       0:01:08.657 ******
2026-01-03 00:44:15.095299 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0772612-0fc2-543a-b7cc-c9fc1cdd665f', 'data_vg': 'ceph-c0772612-0fc2-543a-b7cc-c9fc1cdd665f'})
2026-01-03 00:44:15.095303 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45670551-be8c-5463-bb13-3841732d7282', 'data_vg': 'ceph-45670551-be8c-5463-bb13-3841732d7282'})
2026-01-03 00:44:15.095307 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:15.095311 | orchestrator |
2026-01-03 00:44:15.095314 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-01-03 00:44:15.095318 | orchestrator | Saturday 03 January 2026  00:44:12 +0000 (0:00:00.151)       0:01:08.808 ******
2026-01-03 00:44:15.095322 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:44:15.095327 | orchestrator |
2026-01-03 00:44:15.095331 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-01-03 00:44:15.095335 | orchestrator | Saturday 03 January 2026  00:44:13 +0000 (0:00:00.529)       0:01:09.337 ******
2026-01-03 00:44:15.095339 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:44:15.095343 | orchestrator |
2026-01-03 00:44:15.095347 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-01-03 00:44:15.095350 | orchestrator | Saturday 03 January 2026  00:44:14 +0000 (0:00:00.525)       0:01:09.863 ******
2026-01-03 00:44:15.095354 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:44:15.095360 | orchestrator |
2026-01-03 00:44:15.095366 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-01-03 00:44:15.095376 | orchestrator | Saturday 03 January 2026  00:44:14 +0000 (0:00:00.146)       0:01:10.010 ******
2026-01-03 00:44:15.095383 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-45670551-be8c-5463-bb13-3841732d7282', 'vg_name': 'ceph-45670551-be8c-5463-bb13-3841732d7282'})
2026-01-03 00:44:15.095390 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-c0772612-0fc2-543a-b7cc-c9fc1cdd665f', 'vg_name': 'ceph-c0772612-0fc2-543a-b7cc-c9fc1cdd665f'})
2026-01-03 00:44:15.095402 | orchestrator |
2026-01-03 00:44:15.095408 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-01-03 00:44:15.095414 | orchestrator | Saturday 03 January 2026  00:44:14 +0000 (0:00:00.186)       0:01:10.197 ******
2026-01-03 00:44:15.095437 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0772612-0fc2-543a-b7cc-c9fc1cdd665f', 'data_vg': 'ceph-c0772612-0fc2-543a-b7cc-c9fc1cdd665f'})
2026-01-03 00:44:15.095443 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45670551-be8c-5463-bb13-3841732d7282', 'data_vg': 'ceph-45670551-be8c-5463-bb13-3841732d7282'})
2026-01-03 00:44:15.095449 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:15.095465 | orchestrator |
2026-01-03 00:44:15.095472 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-01-03 00:44:15.095479 | orchestrator | Saturday 03 January 2026  00:44:14 +0000 (0:00:00.164)       0:01:10.361 ******
2026-01-03 00:44:15.095486 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0772612-0fc2-543a-b7cc-c9fc1cdd665f', 'data_vg': 'ceph-c0772612-0fc2-543a-b7cc-c9fc1cdd665f'})
2026-01-03 00:44:15.095492 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45670551-be8c-5463-bb13-3841732d7282', 'data_vg': 'ceph-45670551-be8c-5463-bb13-3841732d7282'})
2026-01-03 00:44:15.095498 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:15.095504 | orchestrator |
2026-01-03 00:44:15.095510 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-01-03 00:44:15.095516 | orchestrator | Saturday 03 January 2026  00:44:14 +0000 (0:00:00.165)       0:01:10.527 ******
2026-01-03 00:44:15.095522 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0772612-0fc2-543a-b7cc-c9fc1cdd665f', 'data_vg': 'ceph-c0772612-0fc2-543a-b7cc-c9fc1cdd665f'})
2026-01-03 00:44:15.095529 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45670551-be8c-5463-bb13-3841732d7282', 'data_vg': 'ceph-45670551-be8c-5463-bb13-3841732d7282'})
2026-01-03 00:44:15.095536 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:44:15.095542 | orchestrator |
2026-01-03 00:44:15.095548 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-01-03 00:44:15.095555 | orchestrator | Saturday 03 January 2026  00:44:14 +0000 (0:00:00.178)       0:01:10.705 ******
2026-01-03 00:44:15.095562 |
orchestrator | ok: [testbed-node-5] => { 2026-01-03 00:44:15.095569 | orchestrator |  "lvm_report": { 2026-01-03 00:44:15.095575 | orchestrator |  "lv": [ 2026-01-03 00:44:15.095582 | orchestrator |  { 2026-01-03 00:44:15.095594 | orchestrator |  "lv_name": "osd-block-45670551-be8c-5463-bb13-3841732d7282", 2026-01-03 00:44:15.095603 | orchestrator |  "vg_name": "ceph-45670551-be8c-5463-bb13-3841732d7282" 2026-01-03 00:44:15.095610 | orchestrator |  }, 2026-01-03 00:44:15.095616 | orchestrator |  { 2026-01-03 00:44:15.095675 | orchestrator |  "lv_name": "osd-block-c0772612-0fc2-543a-b7cc-c9fc1cdd665f", 2026-01-03 00:44:15.095681 | orchestrator |  "vg_name": "ceph-c0772612-0fc2-543a-b7cc-c9fc1cdd665f" 2026-01-03 00:44:15.095685 | orchestrator |  } 2026-01-03 00:44:15.095689 | orchestrator |  ], 2026-01-03 00:44:15.095693 | orchestrator |  "pv": [ 2026-01-03 00:44:15.095699 | orchestrator |  { 2026-01-03 00:44:15.095705 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-03 00:44:15.095711 | orchestrator |  "vg_name": "ceph-c0772612-0fc2-543a-b7cc-c9fc1cdd665f" 2026-01-03 00:44:15.095720 | orchestrator |  }, 2026-01-03 00:44:15.095728 | orchestrator |  { 2026-01-03 00:44:15.095735 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-03 00:44:15.095741 | orchestrator |  "vg_name": "ceph-45670551-be8c-5463-bb13-3841732d7282" 2026-01-03 00:44:15.095747 | orchestrator |  } 2026-01-03 00:44:15.095753 | orchestrator |  ] 2026-01-03 00:44:15.095766 | orchestrator |  } 2026-01-03 00:44:15.095773 | orchestrator | } 2026-01-03 00:44:15.095779 | orchestrator | 2026-01-03 00:44:15.095785 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:44:15.095792 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-03 00:44:15.095799 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-03 00:44:15.095806 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-03 00:44:15.095812 | orchestrator | 2026-01-03 00:44:15.095819 | orchestrator | 2026-01-03 00:44:15.095825 | orchestrator | 2026-01-03 00:44:15.095831 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:44:15.095838 | orchestrator | Saturday 03 January 2026 00:44:15 +0000 (0:00:00.168) 0:01:10.874 ****** 2026-01-03 00:44:15.095844 | orchestrator | =============================================================================== 2026-01-03 00:44:15.095851 | orchestrator | Create block VGs -------------------------------------------------------- 5.83s 2026-01-03 00:44:15.095857 | orchestrator | Create block LVs -------------------------------------------------------- 4.13s 2026-01-03 00:44:15.095864 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.84s 2026-01-03 00:44:15.095870 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.75s 2026-01-03 00:44:15.095876 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.67s 2026-01-03 00:44:15.095883 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.62s 2026-01-03 00:44:15.095890 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.59s 2026-01-03 00:44:15.095896 | orchestrator | Add known partitions to the list of available block devices ------------- 1.34s 2026-01-03 00:44:15.095911 | orchestrator | Add known links to the list of available block devices ------------------ 1.27s 2026-01-03 00:44:15.569209 | orchestrator | Print LVM report data --------------------------------------------------- 0.93s 2026-01-03 00:44:15.569294 | orchestrator | Add known partitions to the list of available block devices ------------- 0.83s 2026-01-03 00:44:15.569304 | 
orchestrator | Add known partitions to the list of available block devices ------------- 0.82s 2026-01-03 00:44:15.569312 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.75s 2026-01-03 00:44:15.569320 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s 2026-01-03 00:44:15.569327 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.72s 2026-01-03 00:44:15.569334 | orchestrator | Get initial list of available block devices ----------------------------- 0.69s 2026-01-03 00:44:15.569341 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.68s 2026-01-03 00:44:15.569348 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s 2026-01-03 00:44:15.569355 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s 2026-01-03 00:44:15.569362 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s 2026-01-03 00:44:28.599794 | orchestrator | 2026-01-03 00:44:28 | INFO  | Task 504a6ae1-188e-4be3-be3d-b5cdeef8f736 (facts) was prepared for execution. 2026-01-03 00:44:28.599881 | orchestrator | 2026-01-03 00:44:28 | INFO  | It takes a moment until task 504a6ae1-188e-4be3-be3d-b5cdeef8f736 (facts) has been started and output is visible here. 
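The "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" and "Create list of VG/LV names" tasks above pair each Ceph logical volume with its backing device via the shared VG name. A minimal sketch of that join, using the exact lvm_report data printed later in this log; the report shape (a single `report` entry with only name columns) is an assumption based on `lvs`/`pvs --reportformat json` output, not taken from the playbook source:

```python
import json

# Report shapes as emitted by `lvs --reportformat json` / `pvs --reportformat json`
# (assumption: only the lv/pv/vg name columns were requested, matching the
# lvm_report printed by the "Print LVM report data" task).
lvs_out = json.loads("""{"report": [{"lv": [
  {"lv_name": "osd-block-45670551-be8c-5463-bb13-3841732d7282",
   "vg_name": "ceph-45670551-be8c-5463-bb13-3841732d7282"},
  {"lv_name": "osd-block-c0772612-0fc2-543a-b7cc-c9fc1cdd665f",
   "vg_name": "ceph-c0772612-0fc2-543a-b7cc-c9fc1cdd665f"}]}]}""")
pvs_out = json.loads("""{"report": [{"pv": [
  {"pv_name": "/dev/sdb", "vg_name": "ceph-c0772612-0fc2-543a-b7cc-c9fc1cdd665f"},
  {"pv_name": "/dev/sdc", "vg_name": "ceph-45670551-be8c-5463-bb13-3841732d7282"}]}]}""")

lvm_report = {
    "lv": lvs_out["report"][0]["lv"],
    "pv": pvs_out["report"][0]["pv"],
}

# "Create list of VG/LV names": the vg/lv pairs later checked against lvm_volumes.
vg_lv = [f"{lv['vg_name']}/{lv['lv_name']}" for lv in lvm_report["lv"]]

# Join each OSD block LV to its physical device via the shared VG name.
pv_by_vg = {pv["vg_name"]: pv["pv_name"] for pv in lvm_report["pv"]}
lv_to_device = {lv["lv_name"]: pv_by_vg[lv["vg_name"]] for lv in lvm_report["lv"]}
```

With the data above this resolves each `osd-block-*` LV to `/dev/sdb` or `/dev/sdc`, which is what the subsequent "Fail if ... LV defined in lvm_volumes is missing" checks rely on.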
2026-01-03 00:44:40.777291 | orchestrator | 2026-01-03 00:44:40.777441 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-03 00:44:40.777460 | orchestrator | 2026-01-03 00:44:40.777473 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-03 00:44:40.777484 | orchestrator | Saturday 03 January 2026 00:44:32 +0000 (0:00:00.254) 0:00:00.254 ****** 2026-01-03 00:44:40.777531 | orchestrator | ok: [testbed-manager] 2026-01-03 00:44:40.777545 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:44:40.777556 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:44:40.777566 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:44:40.777577 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:44:40.777587 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:44:40.777598 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:44:40.777642 | orchestrator | 2026-01-03 00:44:40.777662 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-03 00:44:40.777683 | orchestrator | Saturday 03 January 2026 00:44:33 +0000 (0:00:01.175) 0:00:01.429 ****** 2026-01-03 00:44:40.777702 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:44:40.777723 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:44:40.777735 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:44:40.777745 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:44:40.777756 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:40.777767 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:40.777778 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:40.777790 | orchestrator | 2026-01-03 00:44:40.777803 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-03 00:44:40.777816 | orchestrator | 2026-01-03 00:44:40.777828 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-01-03 00:44:40.777842 | orchestrator | Saturday 03 January 2026 00:44:35 +0000 (0:00:01.213) 0:00:02.643 ****** 2026-01-03 00:44:40.777854 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:44:40.777867 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:44:40.777879 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:44:40.777891 | orchestrator | ok: [testbed-manager] 2026-01-03 00:44:40.777904 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:44:40.777916 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:44:40.777929 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:44:40.777942 | orchestrator | 2026-01-03 00:44:40.777954 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-03 00:44:40.777966 | orchestrator | 2026-01-03 00:44:40.777979 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-03 00:44:40.777992 | orchestrator | Saturday 03 January 2026 00:44:39 +0000 (0:00:04.790) 0:00:07.433 ****** 2026-01-03 00:44:40.778005 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:44:40.778085 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:44:40.778106 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:44:40.778126 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:44:40.778144 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:40.778162 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:40.778181 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:40.778198 | orchestrator | 2026-01-03 00:44:40.778230 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:44:40.778250 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:44:40.778271 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-01-03 00:44:40.778290 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:44:40.778309 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:44:40.778326 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:44:40.778344 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:44:40.778414 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:44:40.778435 | orchestrator | 2026-01-03 00:44:40.778454 | orchestrator | 2026-01-03 00:44:40.778473 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:44:40.778491 | orchestrator | Saturday 03 January 2026 00:44:40 +0000 (0:00:00.508) 0:00:07.942 ****** 2026-01-03 00:44:40.778511 | orchestrator | =============================================================================== 2026-01-03 00:44:40.778529 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.79s 2026-01-03 00:44:40.778549 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.21s 2026-01-03 00:44:40.778568 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.18s 2026-01-03 00:44:40.778587 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s 2026-01-03 00:44:53.051499 | orchestrator | 2026-01-03 00:44:53 | INFO  | Task 80a76aee-fc3d-469d-bec0-fdc41a62a20d (frr) was prepared for execution. 2026-01-03 00:44:53.051708 | orchestrator | 2026-01-03 00:44:53 | INFO  | It takes a moment until task 80a76aee-fc3d-469d-bec0-fdc41a62a20d (frr) has been started and output is visible here. 
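The PLAY RECAP lines above (e.g. `testbed-node-5 : ok=51 changed=2 unreachable=0 failed=0 skipped=62 rescued=0 ignored=0`) follow a fixed `key=value` format, which makes them convenient to post-process when scanning a long job console for failures. A small sketch; the parsing helper is our own illustration, not part of OSISM or Zuul:

```python
import re

# host, then ':', then one or more whitespace-separated key=value counters.
RECAP = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")

def parse_recap_line(line: str) -> tuple[str, dict[str, int]]:
    """Split an Ansible PLAY RECAP line into (host, counter dict)."""
    m = RECAP.match(line.strip())
    if not m:
        raise ValueError(f"not a recap line: {line!r}")
    counters = {
        key: int(value)
        for key, value in (pair.split("=") for pair in m.group("counters").split())
    }
    return m.group("host"), counters

host, counters = parse_recap_line(
    "testbed-node-5 : ok=51 changed=2 unreachable=0 failed=0 "
    "skipped=62 rescued=0 ignored=0"
)
# A host counts as failed if any task failed or it was unreachable.
host_failed = counters["failed"] > 0 or counters["unreachable"] > 0
```

For the recap shown in this log, every host reports `failed=0 unreachable=0`, so such a scan would report the plays as clean.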
2026-01-03 00:45:17.111401 | orchestrator | 2026-01-03 00:45:17.111518 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-01-03 00:45:17.111536 | orchestrator | 2026-01-03 00:45:17.111549 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-01-03 00:45:17.111579 | orchestrator | Saturday 03 January 2026 00:44:57 +0000 (0:00:00.225) 0:00:00.225 ****** 2026-01-03 00:45:17.111592 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-01-03 00:45:17.111679 | orchestrator | 2026-01-03 00:45:17.111691 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-01-03 00:45:17.111702 | orchestrator | Saturday 03 January 2026 00:44:57 +0000 (0:00:00.177) 0:00:00.403 ****** 2026-01-03 00:45:17.111713 | orchestrator | changed: [testbed-manager] 2026-01-03 00:45:17.111725 | orchestrator | 2026-01-03 00:45:17.111736 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-01-03 00:45:17.111753 | orchestrator | Saturday 03 January 2026 00:44:58 +0000 (0:00:01.052) 0:00:01.456 ****** 2026-01-03 00:45:17.111764 | orchestrator | changed: [testbed-manager] 2026-01-03 00:45:17.111775 | orchestrator | 2026-01-03 00:45:17.111785 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-01-03 00:45:17.111796 | orchestrator | Saturday 03 January 2026 00:45:07 +0000 (0:00:08.865) 0:00:10.321 ****** 2026-01-03 00:45:17.111807 | orchestrator | ok: [testbed-manager] 2026-01-03 00:45:17.111818 | orchestrator | 2026-01-03 00:45:17.111829 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-01-03 00:45:17.111839 | orchestrator | Saturday 03 January 2026 00:45:08 +0000 (0:00:01.004) 0:00:11.326 ****** 2026-01-03 
00:45:17.111850 | orchestrator | changed: [testbed-manager] 2026-01-03 00:45:17.111861 | orchestrator | 2026-01-03 00:45:17.111871 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-01-03 00:45:17.111882 | orchestrator | Saturday 03 January 2026 00:45:09 +0000 (0:00:00.934) 0:00:12.261 ****** 2026-01-03 00:45:17.111893 | orchestrator | ok: [testbed-manager] 2026-01-03 00:45:17.111903 | orchestrator | 2026-01-03 00:45:17.111914 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-01-03 00:45:17.111926 | orchestrator | Saturday 03 January 2026 00:45:10 +0000 (0:00:01.129) 0:00:13.391 ****** 2026-01-03 00:45:17.111939 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:45:17.111952 | orchestrator | 2026-01-03 00:45:17.111965 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-01-03 00:45:17.111979 | orchestrator | Saturday 03 January 2026 00:45:10 +0000 (0:00:00.155) 0:00:13.546 ****** 2026-01-03 00:45:17.112017 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:45:17.112030 | orchestrator | 2026-01-03 00:45:17.112042 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-01-03 00:45:17.112055 | orchestrator | Saturday 03 January 2026 00:45:10 +0000 (0:00:00.164) 0:00:13.710 ****** 2026-01-03 00:45:17.112067 | orchestrator | changed: [testbed-manager] 2026-01-03 00:45:17.112079 | orchestrator | 2026-01-03 00:45:17.112093 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-01-03 00:45:17.112106 | orchestrator | Saturday 03 January 2026 00:45:11 +0000 (0:00:00.972) 0:00:14.683 ****** 2026-01-03 00:45:17.112118 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-01-03 00:45:17.112131 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-01-03 00:45:17.112145 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-01-03 00:45:17.112158 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-01-03 00:45:17.112170 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-01-03 00:45:17.112183 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-01-03 00:45:17.112196 | orchestrator | 2026-01-03 00:45:17.112208 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-01-03 00:45:17.112221 | orchestrator | Saturday 03 January 2026 00:45:13 +0000 (0:00:02.188) 0:00:16.871 ****** 2026-01-03 00:45:17.112233 | orchestrator | ok: [testbed-manager] 2026-01-03 00:45:17.112245 | orchestrator | 2026-01-03 00:45:17.112257 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-01-03 00:45:17.112270 | orchestrator | Saturday 03 January 2026 00:45:15 +0000 (0:00:01.547) 0:00:18.418 ****** 2026-01-03 00:45:17.112282 | orchestrator | changed: [testbed-manager] 2026-01-03 00:45:17.112294 | orchestrator | 2026-01-03 00:45:17.112307 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:45:17.112320 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:45:17.112331 | orchestrator | 2026-01-03 00:45:17.112342 | orchestrator | 2026-01-03 00:45:17.112353 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:45:17.112364 | orchestrator | Saturday 03 January 2026 00:45:16 +0000 (0:00:01.369) 0:00:19.788 ****** 2026-01-03 00:45:17.112374 | 
orchestrator | =============================================================================== 2026-01-03 00:45:17.112385 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.87s 2026-01-03 00:45:17.112396 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.19s 2026-01-03 00:45:17.112406 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.55s 2026-01-03 00:45:17.112417 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.37s 2026-01-03 00:45:17.112428 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.13s 2026-01-03 00:45:17.112456 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.05s 2026-01-03 00:45:17.112468 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.00s 2026-01-03 00:45:17.112479 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.97s 2026-01-03 00:45:17.112489 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.93s 2026-01-03 00:45:17.112500 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.18s 2026-01-03 00:45:17.112511 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.16s 2026-01-03 00:45:17.112521 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.16s 2026-01-03 00:45:17.375432 | orchestrator | 2026-01-03 00:45:17.378386 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Jan 3 00:45:17 UTC 2026 2026-01-03 00:45:17.378587 | orchestrator | 2026-01-03 00:45:19.303301 | orchestrator | 2026-01-03 00:45:19 | INFO  | Collection nutshell is prepared for execution 2026-01-03 00:45:19.303394 | orchestrator | 2026-01-03 00:45:19 | INFO  | A [0] - 
dotfiles 2026-01-03 00:45:29.334948 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [0] - homer 2026-01-03 00:45:29.335199 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [0] - netdata 2026-01-03 00:45:29.335232 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [0] - openstackclient 2026-01-03 00:45:29.335265 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [0] - phpmyadmin 2026-01-03 00:45:29.335377 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [0] - common 2026-01-03 00:45:29.339555 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [1] -- loadbalancer 2026-01-03 00:45:29.339647 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [2] --- opensearch 2026-01-03 00:45:29.339898 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [2] --- mariadb-ng 2026-01-03 00:45:29.340101 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [3] ---- horizon 2026-01-03 00:45:29.340361 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [3] ---- keystone 2026-01-03 00:45:29.340813 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [4] ----- neutron 2026-01-03 00:45:29.341013 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [5] ------ wait-for-nova 2026-01-03 00:45:29.341470 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [6] ------- octavia 2026-01-03 00:45:29.343428 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [4] ----- barbican 2026-01-03 00:45:29.343614 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [4] ----- designate 2026-01-03 00:45:29.343778 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [4] ----- ironic 2026-01-03 00:45:29.344127 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [4] ----- placement 2026-01-03 00:45:29.344308 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [4] ----- magnum 2026-01-03 00:45:29.345194 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [1] -- openvswitch 2026-01-03 00:45:29.345247 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [2] --- ovn 2026-01-03 00:45:29.345808 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [1] -- memcached 2026-01-03 
00:45:29.346125 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [1] -- redis 2026-01-03 00:45:29.346349 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [1] -- rabbitmq-ng 2026-01-03 00:45:29.347029 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [0] - kubernetes 2026-01-03 00:45:29.349295 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [1] -- kubeconfig 2026-01-03 00:45:29.349582 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [1] -- copy-kubeconfig 2026-01-03 00:45:29.349633 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [0] - ceph 2026-01-03 00:45:29.352233 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [1] -- ceph-pools 2026-01-03 00:45:29.352296 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [2] --- copy-ceph-keys 2026-01-03 00:45:29.352328 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [3] ---- cephclient 2026-01-03 00:45:29.352347 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-01-03 00:45:29.352365 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [4] ----- wait-for-keystone 2026-01-03 00:45:29.352848 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [5] ------ kolla-ceph-rgw 2026-01-03 00:45:29.352893 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [5] ------ glance 2026-01-03 00:45:29.352946 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [5] ------ cinder 2026-01-03 00:45:29.352966 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [5] ------ nova 2026-01-03 00:45:29.353457 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [4] ----- prometheus 2026-01-03 00:45:29.353497 | orchestrator | 2026-01-03 00:45:29 | INFO  | A [5] ------ grafana 2026-01-03 00:45:29.554414 | orchestrator | 2026-01-03 00:45:29 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-01-03 00:45:29.554506 | orchestrator | 2026-01-03 00:45:29 | INFO  | Tasks are running in the background 2026-01-03 00:45:32.420153 | orchestrator | 2026-01-03 00:45:32 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-01-03 00:45:34.547947 | orchestrator | 2026-01-03 00:45:34 | INFO  | Task dcbf3c09-068e-4074-af69-96de91e96a02 is in state STARTED 2026-01-03 00:45:34.548265 | orchestrator | 2026-01-03 00:45:34 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED 2026-01-03 00:45:34.548979 | orchestrator | 2026-01-03 00:45:34 | INFO  | Task cabfbbce-b39f-4601-8589-431e8ebdb8b4 is in state STARTED 2026-01-03 00:45:34.549792 | orchestrator | 2026-01-03 00:45:34 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:45:34.551383 | orchestrator | 2026-01-03 00:45:34 | INFO  | Task 6b1316be-4413-422d-bfcd-87c0c349f3ac is in state STARTED 2026-01-03 00:45:34.551976 | orchestrator | 2026-01-03 00:45:34 | INFO  | Task 69543c5f-3bee-4f70-844a-37200a090178 is in state STARTED 2026-01-03 00:45:34.551994 | orchestrator | 2026-01-03 00:45:34 | INFO  | Task 591cf906-6784-44b4-951f-f65e7fa54253 is in state STARTED 2026-01-03 00:45:34.552056 | orchestrator | 2026-01-03 00:45:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:45:37.598062 | orchestrator | 2026-01-03 00:45:37 | INFO  | Task dcbf3c09-068e-4074-af69-96de91e96a02 is in state STARTED 2026-01-03 00:45:37.599892 | orchestrator | 2026-01-03 00:45:37 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED 2026-01-03 00:45:37.600643 | orchestrator | 2026-01-03 00:45:37 | INFO  | Task cabfbbce-b39f-4601-8589-431e8ebdb8b4 is in state STARTED 2026-01-03 00:45:37.602048 | orchestrator | 2026-01-03 00:45:37 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:45:37.604524 | orchestrator | 2026-01-03 00:45:37 | INFO  | Task 6b1316be-4413-422d-bfcd-87c0c349f3ac is in state STARTED 2026-01-03 00:45:37.605010 | orchestrator | 2026-01-03 00:45:37 | INFO  | Task 69543c5f-3bee-4f70-844a-37200a090178 is in state STARTED 2026-01-03 00:45:37.606475 | orchestrator | 2026-01-03 00:45:37 | INFO  | Task 
591cf906-6784-44b4-951f-f65e7fa54253 is in state STARTED
2026-01-03 00:45:37.606497 | orchestrator | 2026-01-03 00:45:37 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:45:40.668691 | orchestrator | 2026-01-03 00:45:40 | INFO  | Task dcbf3c09-068e-4074-af69-96de91e96a02 is in state STARTED
2026-01-03 00:45:40.668877 | orchestrator | 2026-01-03 00:45:40 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED
2026-01-03 00:45:40.669349 | orchestrator | 2026-01-03 00:45:40 | INFO  | Task cabfbbce-b39f-4601-8589-431e8ebdb8b4 is in state STARTED
2026-01-03 00:45:40.669978 | orchestrator | 2026-01-03 00:45:40 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED
2026-01-03 00:45:40.670756 | orchestrator | 2026-01-03 00:45:40 | INFO  | Task 6b1316be-4413-422d-bfcd-87c0c349f3ac is in state STARTED
2026-01-03 00:45:40.672089 | orchestrator | 2026-01-03 00:45:40 | INFO  | Task 69543c5f-3bee-4f70-844a-37200a090178 is in state STARTED
2026-01-03 00:45:40.672730 | orchestrator | 2026-01-03 00:45:40 | INFO  | Task 591cf906-6784-44b4-951f-f65e7fa54253 is in state STARTED
2026-01-03 00:45:40.672865 | orchestrator | 2026-01-03 00:45:40 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:45:56.108701 | orchestrator | 2026-01-03 00:45:56 | INFO  | Task dcbf3c09-068e-4074-af69-96de91e96a02 is in state STARTED
2026-01-03 00:45:56.110444 | orchestrator | 2026-01-03 00:45:56 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED
2026-01-03 00:45:56.110486 | orchestrator | 2026-01-03 00:45:56 | INFO  | Task cabfbbce-b39f-4601-8589-431e8ebdb8b4 is in state STARTED
2026-01-03 00:45:56.110495 | orchestrator | 2026-01-03 00:45:56 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED
2026-01-03 00:45:56.112481 | orchestrator | 2026-01-03 00:45:56 | INFO  | Task 6b8260c3-41ec-4dae-af45-b17258ffee21 is in state STARTED
2026-01-03 00:45:56.113276 | orchestrator | 2026-01-03 00:45:56 | INFO  | Task 6b1316be-4413-422d-bfcd-87c0c349f3ac is in state STARTED
2026-01-03 00:45:56.113310 | orchestrator | 2026-01-03 00:45:56 | INFO  | Task 69543c5f-3bee-4f70-844a-37200a090178 is in state SUCCESS
2026-01-03 00:45:56.114059 | orchestrator |
2026-01-03 00:45:56.114121 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-01-03 00:45:56.114132 | orchestrator |
2026-01-03 00:45:56.114140 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-01-03 00:45:56.114147 | orchestrator | Saturday 03 January 2026 00:45:40 +0000 (0:00:00.539) 0:00:00.539 ******
2026-01-03 00:45:56.114155 | orchestrator | changed: [testbed-manager]
2026-01-03 00:45:56.114162 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:45:56.114168 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:45:56.114175 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:45:56.114182 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:45:56.114188 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:45:56.114196 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:45:56.114202 | orchestrator |
2026-01-03 00:45:56.114209 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-01-03 00:45:56.114216 | orchestrator | Saturday 03 January 2026 00:45:45 +0000 (0:00:04.161) 0:00:04.701 ******
2026-01-03 00:45:56.114223 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-01-03 00:45:56.114230 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-01-03 00:45:56.114237 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-01-03 00:45:56.114243 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-01-03 00:45:56.114249 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-01-03 00:45:56.114255 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-01-03 00:45:56.114262 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-01-03 00:45:56.114267 | orchestrator |
2026-01-03 00:45:56.114274 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-01-03 00:45:56.114281 | orchestrator | Saturday 03 January 2026 00:45:46 +0000 (0:00:01.086) 0:00:05.787 ******
2026-01-03 00:45:56.114291 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-03 00:45:45.633127', 'end': '2026-01-03 00:45:45.639767', 'delta': '0:00:00.006640', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-03 00:45:56.114328 | orchestrator | ok: [testbed-node-0] => (item=[0, ...])
2026-01-03 00:45:56.114336 | orchestrator | ok: [testbed-node-1] => (item=[0, ...])
2026-01-03 00:45:56.114366 | orchestrator | ok: [testbed-node-2] => (item=[0, ...])
2026-01-03 00:45:56.114577 | orchestrator | ok: [testbed-node-3] => (item=[0, ...])
2026-01-03 00:45:56.114620 | orchestrator | ok: [testbed-node-5] => (item=[0, ...])
2026-01-03 00:45:56.114639 | orchestrator | ok: [testbed-node-4] => (item=[0, ...])
2026-01-03 00:45:56.114644 | orchestrator |
2026-01-03 00:45:56.114649 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-01-03 00:45:56.114654 | orchestrator | Saturday 03 January 2026 00:45:48 +0000 (0:00:02.104) 0:00:07.891 ******
2026-01-03 00:45:56.114659 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-01-03 00:45:56.114664 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-01-03 00:45:56.114670 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-01-03 00:45:56.114674 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-01-03 00:45:56.114679 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-01-03 00:45:56.114683 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-01-03 00:45:56.114688 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-01-03 00:45:56.114693 | orchestrator |
2026-01-03 00:45:56.114697 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-01-03 00:45:56.114702 | orchestrator | Saturday 03 January 2026 00:45:49 +0000 (0:00:01.694) 0:00:09.586 ******
2026-01-03 00:45:56.114707 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-01-03 00:45:56.114712 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-01-03 00:45:56.114716 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-01-03 00:45:56.114721 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-01-03 00:45:56.114726 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-01-03 00:45:56.114730 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-01-03 00:45:56.114735 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-01-03 00:45:56.114740 | orchestrator |
2026-01-03 00:45:56.114745 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:45:56.114757 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:45:56.114763 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:45:56.114768 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:45:56.114772 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:45:56.114781 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:45:56.114785 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:45:56.114790 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:45:56.114794 | orchestrator |
2026-01-03 00:45:56.114799 | orchestrator |
2026-01-03 00:45:56.114804 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:45:56.114808 | orchestrator | Saturday 03 January 2026 00:45:53 +0000 (0:00:03.155) 0:00:12.741 ******
2026-01-03 00:45:56.114813 | orchestrator | ===============================================================================
2026-01-03 00:45:56.114818 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.16s
2026-01-03 00:45:56.114822 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.16s
2026-01-03 00:45:56.114827 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.10s
2026-01-03 00:45:56.114832 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.69s
2026-01-03 00:45:56.114837 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.09s
2026-01-03 00:45:56.115037 | orchestrator | 2026-01-03 00:45:56 | INFO  | Task 591cf906-6784-44b4-951f-f65e7fa54253 is in state STARTED
2026-01-03 00:45:56.115050 | orchestrator | 2026-01-03 00:45:56 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:46:20.801143 | orchestrator | 2026-01-03 00:46:20 | INFO  | Task dcbf3c09-068e-4074-af69-96de91e96a02 is in state STARTED
2026-01-03 00:46:20.801231 | orchestrator | 2026-01-03 00:46:20 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED
2026-01-03 00:46:20.801239 | orchestrator | 2026-01-03 00:46:20 | INFO  | Task cabfbbce-b39f-4601-8589-431e8ebdb8b4 is in state STARTED
2026-01-03 00:46:20.801245 | orchestrator | 2026-01-03 00:46:20 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED
2026-01-03 00:46:20.801251 | orchestrator | 2026-01-03 00:46:20 | INFO  | Task 6b8260c3-41ec-4dae-af45-b17258ffee21 is in state STARTED
2026-01-03 00:46:20.801256 | orchestrator | 2026-01-03 00:46:20 | INFO  | Task 6b1316be-4413-422d-bfcd-87c0c349f3ac is in state SUCCESS
2026-01-03 00:46:20.801261 | orchestrator | 2026-01-03 00:46:20 | INFO  | Task 591cf906-6784-44b4-951f-f65e7fa54253 is in state STARTED
2026-01-03 00:46:20.801267 | orchestrator | 2026-01-03 00:46:20 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:46:29.767261 | orchestrator | 2026-01-03 00:46:29 | INFO  | Task dcbf3c09-068e-4074-af69-96de91e96a02 is in state STARTED
2026-01-03 00:46:29.771719 | orchestrator | 2026-01-03 00:46:29 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED
2026-01-03 00:46:29.771773 | orchestrator | 2026-01-03 00:46:29 | INFO  | Task cabfbbce-b39f-4601-8589-431e8ebdb8b4 is in state STARTED
2026-01-03 00:46:29.771779 | orchestrator | 2026-01-03 00:46:29 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED
2026-01-03 00:46:29.771784 | orchestrator | 2026-01-03 00:46:29 | INFO  | Task 6b8260c3-41ec-4dae-af45-b17258ffee21 is in state STARTED
2026-01-03 00:46:29.771788 | orchestrator | 2026-01-03 00:46:29 | INFO  | Task 591cf906-6784-44b4-951f-f65e7fa54253 is in state SUCCESS
2026-01-03 00:46:29.771792 | orchestrator | 2026-01-03 00:46:29 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:47:00.340140 | orchestrator | 2026-01-03 00:47:00 | INFO  | Task dcbf3c09-068e-4074-af69-96de91e96a02 is in state STARTED
2026-01-03 00:47:00.340539 | orchestrator | 2026-01-03 00:47:00 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED
2026-01-03 00:47:00.341717 | orchestrator | 2026-01-03 00:47:00 | INFO  | Task cabfbbce-b39f-4601-8589-431e8ebdb8b4 is in state STARTED
2026-01-03 00:47:00.342536 | orchestrator | 2026-01-03 00:47:00 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED
2026-01-03 00:47:00.343424 | orchestrator | 2026-01-03 00:47:00 | INFO  | Task 6b8260c3-41ec-4dae-af45-b17258ffee21 is in state SUCCESS
2026-01-03 00:47:00.343446 | orchestrator | 2026-01-03 00:47:00 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:47:00.344425 | orchestrator |
2026-01-03 00:47:00.344455 | orchestrator |
2026-01-03 00:47:00.344463 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-01-03 00:47:00.344469 | orchestrator |
2026-01-03 00:47:00.344474 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-01-03 00:47:00.344481 | orchestrator | Saturday 03 January 2026 00:45:42 +0000 (0:00:00.647) 0:00:00.647 ******
2026-01-03 00:47:00.344486 | orchestrator | ok: [testbed-manager] => {
2026-01-03 00:47:00.344493 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-01-03 00:47:00.344502 | orchestrator | }
2026-01-03 00:47:00.344514 | orchestrator |
2026-01-03 00:47:00.344520 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-01-03 00:47:00.344526 | orchestrator | Saturday 03 January 2026 00:45:42 +0000 (0:00:00.383) 0:00:01.031 ******
2026-01-03 00:47:00.344531 | orchestrator | ok: [testbed-manager]
2026-01-03 00:47:00.344537 | orchestrator |
2026-01-03 00:47:00.344542 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-01-03 00:47:00.344561 | orchestrator | Saturday 03 January 2026 00:45:44 +0000 (0:00:01.501) 0:00:02.532 ******
2026-01-03 00:47:00.344565 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-01-03 00:47:00.344571 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-01-03 00:47:00.344576 | orchestrator |
2026-01-03 00:47:00.344597 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-01-03 00:47:00.344600 | orchestrator | Saturday 03 January 2026 00:45:46 +0000 (0:00:01.667) 0:00:04.200 ******
2026-01-03 00:47:00.344614 | orchestrator | changed: [testbed-manager]
2026-01-03 00:47:00.344618 | orchestrator |
2026-01-03 00:47:00.344621 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-01-03 00:47:00.344625 | orchestrator | Saturday 03 January 2026 00:45:48 +0000 (0:00:02.788) 0:00:06.989 ******
2026-01-03 00:47:00.344628 | orchestrator | changed: [testbed-manager]
2026-01-03 00:47:00.344631 | orchestrator |
2026-01-03 00:47:00.344635 | orchestrator | TASK [osism.services.homer : 
Manage homer service] ***************************** 2026-01-03 00:47:00.344638 | orchestrator | Saturday 03 January 2026 00:45:51 +0000 (0:00:02.431) 0:00:09.420 ****** 2026-01-03 00:47:00.344641 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 2026-01-03 00:47:00.344645 | orchestrator | ok: [testbed-manager] 2026-01-03 00:47:00.344648 | orchestrator | 2026-01-03 00:47:00.344651 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2026-01-03 00:47:00.344654 | orchestrator | Saturday 03 January 2026 00:46:16 +0000 (0:00:25.295) 0:00:34.716 ****** 2026-01-03 00:47:00.344657 | orchestrator | changed: [testbed-manager] 2026-01-03 00:47:00.344661 | orchestrator | 2026-01-03 00:47:00.344664 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:47:00.344667 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:47:00.344671 | orchestrator | 2026-01-03 00:47:00.344675 | orchestrator | 2026-01-03 00:47:00.344678 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:47:00.344681 | orchestrator | Saturday 03 January 2026 00:46:18 +0000 (0:00:02.100) 0:00:36.817 ****** 2026-01-03 00:47:00.344684 | orchestrator | =============================================================================== 2026-01-03 00:47:00.344687 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.30s 2026-01-03 00:47:00.344691 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.79s 2026-01-03 00:47:00.344694 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.43s 2026-01-03 00:47:00.344697 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.10s 2026-01-03 00:47:00.344700 | 
orchestrator | osism.services.homer : Create required directories ---------------------- 1.67s 2026-01-03 00:47:00.344703 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.50s 2026-01-03 00:47:00.344706 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.38s 2026-01-03 00:47:00.344710 | orchestrator | 2026-01-03 00:47:00.344713 | orchestrator | 2026-01-03 00:47:00.344717 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-01-03 00:47:00.344720 | orchestrator | 2026-01-03 00:47:00.344723 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-01-03 00:47:00.344726 | orchestrator | Saturday 03 January 2026 00:45:43 +0000 (0:00:00.472) 0:00:00.472 ****** 2026-01-03 00:47:00.344729 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-01-03 00:47:00.344735 | orchestrator | 2026-01-03 00:47:00.344740 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-01-03 00:47:00.344747 | orchestrator | Saturday 03 January 2026 00:45:43 +0000 (0:00:00.339) 0:00:00.812 ****** 2026-01-03 00:47:00.344754 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-01-03 00:47:00.344759 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-01-03 00:47:00.344764 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-01-03 00:47:00.344769 | orchestrator | 2026-01-03 00:47:00.344774 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-01-03 00:47:00.344779 | orchestrator | Saturday 03 January 2026 00:45:45 +0000 (0:00:01.920) 0:00:02.733 ****** 2026-01-03 00:47:00.344789 | orchestrator | 
changed: [testbed-manager] 2026-01-03 00:47:00.344795 | orchestrator | 2026-01-03 00:47:00.344800 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-01-03 00:47:00.344805 | orchestrator | Saturday 03 January 2026 00:45:48 +0000 (0:00:02.412) 0:00:05.145 ****** 2026-01-03 00:47:00.344817 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-01-03 00:47:00.344823 | orchestrator | ok: [testbed-manager] 2026-01-03 00:47:00.344828 | orchestrator | 2026-01-03 00:47:00.344833 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-01-03 00:47:00.344839 | orchestrator | Saturday 03 January 2026 00:46:21 +0000 (0:00:33.627) 0:00:38.772 ****** 2026-01-03 00:47:00.344844 | orchestrator | changed: [testbed-manager] 2026-01-03 00:47:00.344849 | orchestrator | 2026-01-03 00:47:00.344856 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-01-03 00:47:00.344862 | orchestrator | Saturday 03 January 2026 00:46:23 +0000 (0:00:01.410) 0:00:40.183 ****** 2026-01-03 00:47:00.344867 | orchestrator | ok: [testbed-manager] 2026-01-03 00:47:00.344872 | orchestrator | 2026-01-03 00:47:00.344877 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-01-03 00:47:00.344882 | orchestrator | Saturday 03 January 2026 00:46:23 +0000 (0:00:00.627) 0:00:40.810 ****** 2026-01-03 00:47:00.344888 | orchestrator | changed: [testbed-manager] 2026-01-03 00:47:00.344893 | orchestrator | 2026-01-03 00:47:00.344898 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-01-03 00:47:00.344903 | orchestrator | Saturday 03 January 2026 00:46:25 +0000 (0:00:01.946) 0:00:42.757 ****** 2026-01-03 00:47:00.344908 | orchestrator | changed: [testbed-manager] 2026-01-03 00:47:00.344914 | orchestrator | 
2026-01-03 00:47:00.344919 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-01-03 00:47:00.344924 | orchestrator | Saturday 03 January 2026 00:46:26 +0000 (0:00:00.985) 0:00:43.742 ****** 2026-01-03 00:47:00.344929 | orchestrator | changed: [testbed-manager] 2026-01-03 00:47:00.344935 | orchestrator | 2026-01-03 00:47:00.344940 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-01-03 00:47:00.344945 | orchestrator | Saturday 03 January 2026 00:46:27 +0000 (0:00:00.469) 0:00:44.211 ****** 2026-01-03 00:47:00.344950 | orchestrator | ok: [testbed-manager] 2026-01-03 00:47:00.344955 | orchestrator | 2026-01-03 00:47:00.344960 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:47:00.344965 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:47:00.344971 | orchestrator | 2026-01-03 00:47:00.344976 | orchestrator | 2026-01-03 00:47:00.344981 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:47:00.344986 | orchestrator | Saturday 03 January 2026 00:46:27 +0000 (0:00:00.444) 0:00:44.656 ****** 2026-01-03 00:47:00.344992 | orchestrator | =============================================================================== 2026-01-03 00:47:00.344997 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.63s 2026-01-03 00:47:00.345002 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.41s 2026-01-03 00:47:00.345008 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.95s 2026-01-03 00:47:00.345013 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.92s 2026-01-03 00:47:00.345018 | orchestrator | osism.services.openstackclient : 
Copy openstack wrapper script ---------- 1.41s 2026-01-03 00:47:00.345023 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.99s 2026-01-03 00:47:00.345028 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.63s 2026-01-03 00:47:00.345034 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.47s 2026-01-03 00:47:00.345039 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.44s 2026-01-03 00:47:00.345048 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.34s 2026-01-03 00:47:00.345053 | orchestrator | 2026-01-03 00:47:00.345058 | orchestrator | 2026-01-03 00:47:00.345063 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-01-03 00:47:00.345068 | orchestrator | 2026-01-03 00:47:00.345072 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-01-03 00:47:00.345077 | orchestrator | Saturday 03 January 2026 00:45:58 +0000 (0:00:00.197) 0:00:00.197 ****** 2026-01-03 00:47:00.345083 | orchestrator | ok: [testbed-manager] 2026-01-03 00:47:00.345088 | orchestrator | 2026-01-03 00:47:00.345093 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-01-03 00:47:00.345098 | orchestrator | Saturday 03 January 2026 00:46:00 +0000 (0:00:02.552) 0:00:02.749 ****** 2026-01-03 00:47:00.345104 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-01-03 00:47:00.345109 | orchestrator | 2026-01-03 00:47:00.345114 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-01-03 00:47:00.345119 | orchestrator | Saturday 03 January 2026 00:46:01 +0000 (0:00:00.800) 0:00:03.550 ****** 2026-01-03 00:47:00.345125 | orchestrator | changed: [testbed-manager] 2026-01-03 00:47:00.345130 
| orchestrator | 2026-01-03 00:47:00.345135 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-01-03 00:47:00.345140 | orchestrator | Saturday 03 January 2026 00:46:02 +0000 (0:00:01.185) 0:00:04.736 ****** 2026-01-03 00:47:00.345145 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2026-01-03 00:47:00.345151 | orchestrator | ok: [testbed-manager] 2026-01-03 00:47:00.345156 | orchestrator | 2026-01-03 00:47:00.345161 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-01-03 00:47:00.345166 | orchestrator | Saturday 03 January 2026 00:46:56 +0000 (0:00:53.317) 0:00:58.054 ****** 2026-01-03 00:47:00.345172 | orchestrator | changed: [testbed-manager] 2026-01-03 00:47:00.345177 | orchestrator | 2026-01-03 00:47:00.345182 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:47:00.345188 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:47:00.345193 | orchestrator | 2026-01-03 00:47:00.345198 | orchestrator | 2026-01-03 00:47:00.345203 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:47:00.345212 | orchestrator | Saturday 03 January 2026 00:46:59 +0000 (0:00:03.408) 0:01:01.462 ****** 2026-01-03 00:47:00.345217 | orchestrator | =============================================================================== 2026-01-03 00:47:00.345223 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 53.32s 2026-01-03 00:47:00.345230 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.41s 2026-01-03 00:47:00.345236 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 2.55s 2026-01-03 00:47:00.345241 | orchestrator | 
osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.19s 2026-01-03 00:47:00.345246 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.80s 2026-01-03 00:47:03.389288 | orchestrator | 2026-01-03 00:47:03 | INFO  | Task dcbf3c09-068e-4074-af69-96de91e96a02 is in state SUCCESS 2026-01-03 00:47:03.389803 | orchestrator | 2026-01-03 00:47:03.389828 | orchestrator | 2026-01-03 00:47:03.389834 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-03 00:47:03.389841 | orchestrator | 2026-01-03 00:47:03.389846 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-03 00:47:03.389852 | orchestrator | Saturday 03 January 2026 00:45:40 +0000 (0:00:00.349) 0:00:00.349 ****** 2026-01-03 00:47:03.389858 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-01-03 00:47:03.389864 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-01-03 00:47:03.389869 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-01-03 00:47:03.389888 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-01-03 00:47:03.389893 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-01-03 00:47:03.389898 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-01-03 00:47:03.389903 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-01-03 00:47:03.389909 | orchestrator | 2026-01-03 00:47:03.389915 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-01-03 00:47:03.389921 | orchestrator | 2026-01-03 00:47:03.389926 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-01-03 00:47:03.389933 | orchestrator | Saturday 03 January 2026 00:45:42 +0000 (0:00:02.433) 0:00:02.783 ****** 
2026-01-03 00:47:03.389945 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:47:03.389953 | orchestrator | 2026-01-03 00:47:03.389959 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-01-03 00:47:03.389964 | orchestrator | Saturday 03 January 2026 00:45:44 +0000 (0:00:01.140) 0:00:03.924 ****** 2026-01-03 00:47:03.389969 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:47:03.389975 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:47:03.389980 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:47:03.389985 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:47:03.389990 | orchestrator | ok: [testbed-manager] 2026-01-03 00:47:03.389995 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:47:03.389999 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:47:03.390005 | orchestrator | 2026-01-03 00:47:03.390010 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-01-03 00:47:03.390067 | orchestrator | Saturday 03 January 2026 00:45:45 +0000 (0:00:01.653) 0:00:05.577 ****** 2026-01-03 00:47:03.390073 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:47:03.390078 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:47:03.390083 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:47:03.390088 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:47:03.390092 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:47:03.390097 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:47:03.390101 | orchestrator | ok: [testbed-manager] 2026-01-03 00:47:03.390106 | orchestrator | 2026-01-03 00:47:03.390111 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-01-03 00:47:03.390116 | orchestrator | Saturday 03 January 
2026 00:45:49 +0000 (0:00:04.051) 0:00:09.629 ****** 2026-01-03 00:47:03.390120 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:47:03.390125 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:47:03.390130 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:47:03.390135 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:47:03.390147 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:47:03.390153 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:47:03.390157 | orchestrator | changed: [testbed-manager] 2026-01-03 00:47:03.390162 | orchestrator | 2026-01-03 00:47:03.390166 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-01-03 00:47:03.390171 | orchestrator | Saturday 03 January 2026 00:45:51 +0000 (0:00:01.834) 0:00:11.463 ****** 2026-01-03 00:47:03.390175 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:47:03.390180 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:47:03.390185 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:47:03.390190 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:47:03.390196 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:47:03.390201 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:47:03.390204 | orchestrator | changed: [testbed-manager] 2026-01-03 00:47:03.390207 | orchestrator | 2026-01-03 00:47:03.390210 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-01-03 00:47:03.390213 | orchestrator | Saturday 03 January 2026 00:46:04 +0000 (0:00:13.011) 0:00:24.475 ****** 2026-01-03 00:47:03.390224 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:47:03.390227 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:47:03.390230 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:47:03.390233 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:47:03.390238 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:47:03.390243 | orchestrator 
| changed: [testbed-node-5] 2026-01-03 00:47:03.390248 | orchestrator | changed: [testbed-manager] 2026-01-03 00:47:03.390253 | orchestrator | 2026-01-03 00:47:03.390257 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-01-03 00:47:03.390263 | orchestrator | Saturday 03 January 2026 00:46:40 +0000 (0:00:35.501) 0:00:59.976 ****** 2026-01-03 00:47:03.390268 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:47:03.390274 | orchestrator | 2026-01-03 00:47:03.390284 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-01-03 00:47:03.390290 | orchestrator | Saturday 03 January 2026 00:46:41 +0000 (0:00:01.606) 0:01:01.583 ****** 2026-01-03 00:47:03.390294 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-01-03 00:47:03.390297 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-01-03 00:47:03.390300 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-01-03 00:47:03.390303 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-01-03 00:47:03.390314 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-01-03 00:47:03.390318 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-01-03 00:47:03.390321 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-01-03 00:47:03.390324 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-01-03 00:47:03.390327 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-01-03 00:47:03.390330 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-01-03 00:47:03.390333 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-01-03 00:47:03.390336 | 
orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-01-03 00:47:03.390339 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-01-03 00:47:03.390342 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-01-03 00:47:03.390345 | orchestrator | 2026-01-03 00:47:03.390348 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-01-03 00:47:03.390352 | orchestrator | Saturday 03 January 2026 00:46:46 +0000 (0:00:04.984) 0:01:06.567 ****** 2026-01-03 00:47:03.390355 | orchestrator | ok: [testbed-manager] 2026-01-03 00:47:03.390358 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:47:03.390361 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:47:03.390365 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:47:03.390371 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:47:03.390376 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:47:03.390381 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:47:03.390386 | orchestrator | 2026-01-03 00:47:03.390391 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-01-03 00:47:03.390397 | orchestrator | Saturday 03 January 2026 00:46:47 +0000 (0:00:00.914) 0:01:07.482 ****** 2026-01-03 00:47:03.390404 | orchestrator | changed: [testbed-manager] 2026-01-03 00:47:03.390412 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:47:03.390416 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:47:03.390421 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:47:03.390426 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:47:03.390431 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:47:03.390436 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:47:03.390441 | orchestrator | 2026-01-03 00:47:03.390446 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2026-01-03 00:47:03.390451 | orchestrator | 
Saturday 03 January 2026 00:46:49 +0000 (0:00:01.571) 0:01:09.053 ****** 2026-01-03 00:47:03.390462 | orchestrator | ok: [testbed-manager] 2026-01-03 00:47:03.390467 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:47:03.390473 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:47:03.390478 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:47:03.390483 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:47:03.390489 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:47:03.390493 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:47:03.390497 | orchestrator | 2026-01-03 00:47:03.390501 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-01-03 00:47:03.390504 | orchestrator | Saturday 03 January 2026 00:46:50 +0000 (0:00:01.320) 0:01:10.374 ****** 2026-01-03 00:47:03.390508 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:47:03.390512 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:47:03.390515 | orchestrator | ok: [testbed-manager] 2026-01-03 00:47:03.390519 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:47:03.390522 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:47:03.390526 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:47:03.390529 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:47:03.390533 | orchestrator | 2026-01-03 00:47:03.390537 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-01-03 00:47:03.390540 | orchestrator | Saturday 03 January 2026 00:46:52 +0000 (0:00:02.168) 0:01:12.542 ****** 2026-01-03 00:47:03.390562 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-01-03 00:47:03.390568 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 
00:47:03.390573 | orchestrator | 2026-01-03 00:47:03.390576 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-01-03 00:47:03.390580 | orchestrator | Saturday 03 January 2026 00:46:54 +0000 (0:00:01.642) 0:01:14.185 ****** 2026-01-03 00:47:03.390584 | orchestrator | changed: [testbed-manager] 2026-01-03 00:47:03.390587 | orchestrator | 2026-01-03 00:47:03.390591 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-01-03 00:47:03.390595 | orchestrator | Saturday 03 January 2026 00:46:57 +0000 (0:00:03.124) 0:01:17.309 ****** 2026-01-03 00:47:03.390598 | orchestrator | changed: [testbed-manager] 2026-01-03 00:47:03.390602 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:47:03.390606 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:47:03.390609 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:47:03.390613 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:47:03.390617 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:47:03.390622 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:47:03.390627 | orchestrator | 2026-01-03 00:47:03.390632 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:47:03.390637 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:47:03.390647 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:47:03.390656 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:47:03.390661 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:47:03.390672 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:47:03.390677 | orchestrator | 
testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:47:03.390682 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:47:03.390692 | orchestrator | 2026-01-03 00:47:03.390697 | orchestrator | 2026-01-03 00:47:03.390701 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:47:03.390705 | orchestrator | Saturday 03 January 2026 00:47:00 +0000 (0:00:03.317) 0:01:20.626 ****** 2026-01-03 00:47:03.390709 | orchestrator | =============================================================================== 2026-01-03 00:47:03.390713 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 35.50s 2026-01-03 00:47:03.390717 | orchestrator | osism.services.netdata : Add repository -------------------------------- 13.01s 2026-01-03 00:47:03.390720 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.98s 2026-01-03 00:47:03.390724 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.05s 2026-01-03 00:47:03.390728 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.32s 2026-01-03 00:47:03.390731 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 3.12s 2026-01-03 00:47:03.390735 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.43s 2026-01-03 00:47:03.390739 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.17s 2026-01-03 00:47:03.390752 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.83s 2026-01-03 00:47:03.390756 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.65s 2026-01-03 00:47:03.390760 | orchestrator | osism.services.netdata : Include host 
type specific tasks --------------- 1.64s 2026-01-03 00:47:03.390764 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.61s 2026-01-03 00:47:03.390767 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.57s 2026-01-03 00:47:03.390771 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.32s 2026-01-03 00:47:03.390775 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.14s 2026-01-03 00:47:03.390778 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 0.91s 2026-01-03 00:47:03.391847 | orchestrator | 2026-01-03 00:47:03 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED 2026-01-03 00:47:03.394426 | orchestrator | 2026-01-03 00:47:03 | INFO  | Task cabfbbce-b39f-4601-8589-431e8ebdb8b4 is in state STARTED 2026-01-03 00:47:03.394477 | orchestrator | 2026-01-03 00:47:03 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:47:03.394485 | orchestrator | 2026-01-03 00:47:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:47:06.456657 | orchestrator | 2026-01-03 00:47:06 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED 2026-01-03 00:47:06.457509 | orchestrator | 2026-01-03 00:47:06 | INFO  | Task cabfbbce-b39f-4601-8589-431e8ebdb8b4 is in state STARTED 2026-01-03 00:47:06.460822 | orchestrator | 2026-01-03 00:47:06 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:47:06.460895 | orchestrator | 2026-01-03 00:47:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:47:09.497626 | orchestrator | 2026-01-03 00:47:09 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED 2026-01-03 00:47:09.499365 | orchestrator | 2026-01-03 00:47:09 | INFO  | Task cabfbbce-b39f-4601-8589-431e8ebdb8b4 is in state STARTED 
2026-01-03 00:48:10.466994 | orchestrator | 2026-01-03 00:48:10 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED
2026-01-03 00:48:10.468201 | orchestrator | 2026-01-03 00:48:10 | INFO  | Task cabfbbce-b39f-4601-8589-431e8ebdb8b4 is in state STARTED
2026-01-03 00:48:10.470463 | orchestrator | 2026-01-03 00:48:10 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED
2026-01-03 00:48:10.470833 | orchestrator | 2026-01-03 00:48:10 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:48:13.503315 | orchestrator | 2026-01-03 00:48:13 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED
2026-01-03 00:48:13.505440 | orchestrator | 2026-01-03 00:48:13 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED
2026-01-03 00:48:13.508123 | orchestrator | 2026-01-03 00:48:13 | INFO  | Task cabfbbce-b39f-4601-8589-431e8ebdb8b4 is in state SUCCESS
2026-01-03 00:48:13.509829 | orchestrator |
2026-01-03 00:48:13.509886 | orchestrator |
2026-01-03 00:48:13.509894 | orchestrator | PLAY [Apply role common] *******************************************************
2026-01-03 00:48:13.509900 | orchestrator |
2026-01-03 00:48:13.509905 | orchestrator | TASK [common : include_tasks] **************************************************
2026-01-03 00:48:13.509910 | orchestrator | Saturday 03 January 2026 00:45:34 +0000 (0:00:00.212) 0:00:00.212 ******
2026-01-03 00:48:13.509917 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:48:13.509986 | orchestrator |
2026-01-03 00:48:13.509993 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-01-03
00:48:13.509998 | orchestrator | Saturday 03 January 2026 00:45:35 +0000 (0:00:01.096) 0:00:01.309 ******
2026-01-03 00:48:13.510005 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-03 00:48:13.510011 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-03 00:48:13.510059 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-03 00:48:13.510065 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-03 00:48:13.510071 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-03 00:48:13.510076 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-03 00:48:13.510083 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-03 00:48:13.510089 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-03 00:48:13.510094 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-03 00:48:13.510099 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-03 00:48:13.510106 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-03 00:48:13.510112 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-03 00:48:13.510117 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-03 00:48:13.510123 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-03 00:48:13.510128 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-03 00:48:13.510133 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-03 00:48:13.510141 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-03 00:48:13.510147 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-03 00:48:13.510152 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-03 00:48:13.510157 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-03 00:48:13.510163 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-03 00:48:13.510167 | orchestrator |
2026-01-03 00:48:13.510173 | orchestrator | TASK [common : include_tasks] **************************************************
2026-01-03 00:48:13.510179 | orchestrator | Saturday 03 January 2026 00:45:39 +0000 (0:00:04.121) 0:00:05.430 ******
2026-01-03 00:48:13.510184 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:48:13.510191 | orchestrator |
2026-01-03 00:48:13.510196 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-01-03 00:48:13.510202 | orchestrator | Saturday 03 January 2026 00:45:40 +0000 (0:00:01.278) 0:00:06.709 ******
2026-01-03 00:48:13.510219 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/',
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:48:13.510235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:48:13.510258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:48:13.510265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:48:13.510271 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.510276 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:48:13.510282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.510296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.510306 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:48:13.510321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.510326 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:48:13.510332 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.510339 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.510352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.510358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 
00:48:13.510366 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.510377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.510387 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.510394 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.510403 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.510410 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.510416 | orchestrator | 2026-01-03 00:48:13.510421 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-01-03 00:48:13.510427 | orchestrator | Saturday 03 January 2026 00:45:45 +0000 (0:00:05.241) 0:00:11.950 ****** 2026-01-03 00:48:13.510432 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-03 00:48:13.510438 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-03 00:48:13.510452 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.510457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.510468 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.510473 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:48:13.510480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-03 00:48:13.510485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.510491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-01-03 00:48:13.510625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.510644 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:48:13.510652 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:48:13.510660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-03 00:48:13.510696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.510704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.510721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-03 00:48:13.510726 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:48:13.510732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-03 00:48:13.510738 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.510744 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.510749 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.510761 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:48:13.510767 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-03 00:48:13.510773 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.510783 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.510789 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:48:13.510795 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.510801 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:48:13.510807 | orchestrator | 2026-01-03 00:48:13.510813 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-01-03 00:48:13.510820 | orchestrator | Saturday 03 January 2026 00:45:48 +0000 (0:00:02.536) 0:00:14.487 ****** 2026-01-03 00:48:13.510830 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-03 00:48:13.510836 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.510846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-03 00:48:13.510852 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.510860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.510866 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:48:13.510873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.510883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-03 00:48:13.510890 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:48:13.510896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-03 00:48:13.510901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.510907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.510916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.510922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.510928 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:48:13.510933 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:48:13.510942 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-03 00:48:13.510948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-03 00:48:13.511625 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-03 00:48:13.511662 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.511669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.511684 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.511690 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:48:13.511696 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.511701 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.511706 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:48:13.511716 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.511721 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:48:13.511726 | 
orchestrator | 2026-01-03 00:48:13.511731 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-01-03 00:48:13.511736 | orchestrator | Saturday 03 January 2026 00:45:52 +0000 (0:00:03.939) 0:00:18.426 ****** 2026-01-03 00:48:13.511741 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:48:13.511746 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:48:13.511752 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:48:13.511757 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:48:13.511763 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:48:13.511809 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:48:13.511814 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:48:13.511817 | orchestrator | 2026-01-03 00:48:13.511820 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-01-03 00:48:13.511823 | orchestrator | Saturday 03 January 2026 00:45:53 +0000 (0:00:01.280) 0:00:19.707 ****** 2026-01-03 00:48:13.511827 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:48:13.511830 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:48:13.511833 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:48:13.511836 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:48:13.511839 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:48:13.511842 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:48:13.511845 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:48:13.511853 | orchestrator | 2026-01-03 00:48:13.511857 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-01-03 00:48:13.511860 | orchestrator | Saturday 03 January 2026 00:45:54 +0000 (0:00:01.191) 0:00:20.899 ****** 2026-01-03 00:48:13.511863 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:48:13.511866 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:48:13.511869 | 
orchestrator | skipping: [testbed-node-1] 2026-01-03 00:48:13.511872 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:48:13.511875 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:48:13.511878 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:48:13.511881 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:48:13.511884 | orchestrator | 2026-01-03 00:48:13.511887 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-01-03 00:48:13.511890 | orchestrator | Saturday 03 January 2026 00:45:56 +0000 (0:00:01.492) 0:00:22.392 ****** 2026-01-03 00:48:13.511893 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:48:13.511897 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:48:13.511900 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:48:13.511903 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:48:13.511906 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:48:13.511909 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:48:13.511912 | orchestrator | changed: [testbed-manager] 2026-01-03 00:48:13.511915 | orchestrator | 2026-01-03 00:48:13.511918 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-01-03 00:48:13.511921 | orchestrator | Saturday 03 January 2026 00:45:59 +0000 (0:00:02.933) 0:00:25.326 ****** 2026-01-03 00:48:13.511925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:48:13.511928 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:48:13.511932 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:48:13.511937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:48:13.511941 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:48:13.511957 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:48:13.511961 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:48:13.511964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.511968 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.511971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.511976 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.511979 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.511987 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.511990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.511993 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': 
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.511997 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.512000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.512004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.512008 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.512012 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.512023 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.512026 | orchestrator | 2026-01-03 00:48:13.512029 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-01-03 00:48:13.512032 | orchestrator | Saturday 03 January 2026 00:46:03 +0000 (0:00:04.229) 0:00:29.555 ****** 2026-01-03 00:48:13.512035 | orchestrator | [WARNING]: Skipped 2026-01-03 00:48:13.512040 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-01-03 00:48:13.512043 | orchestrator | to this access issue: 2026-01-03 00:48:13.512047 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-01-03 00:48:13.512050 | orchestrator | directory 2026-01-03 00:48:13.512053 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-03 00:48:13.512056 | orchestrator | 2026-01-03 00:48:13.512059 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-01-03 00:48:13.512062 | orchestrator | 
Saturday 03 January 2026 00:46:04 +0000 (0:00:01.075) 0:00:30.630 ****** 2026-01-03 00:48:13.512065 | orchestrator | [WARNING]: Skipped 2026-01-03 00:48:13.512068 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-01-03 00:48:13.512071 | orchestrator | to this access issue: 2026-01-03 00:48:13.512074 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-01-03 00:48:13.512077 | orchestrator | directory 2026-01-03 00:48:13.512080 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-03 00:48:13.512083 | orchestrator | 2026-01-03 00:48:13.512086 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-01-03 00:48:13.512089 | orchestrator | Saturday 03 January 2026 00:46:05 +0000 (0:00:00.991) 0:00:31.621 ****** 2026-01-03 00:48:13.512092 | orchestrator | [WARNING]: Skipped 2026-01-03 00:48:13.512095 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-01-03 00:48:13.512098 | orchestrator | to this access issue: 2026-01-03 00:48:13.512101 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-01-03 00:48:13.512104 | orchestrator | directory 2026-01-03 00:48:13.512107 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-03 00:48:13.512111 | orchestrator | 2026-01-03 00:48:13.512114 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-01-03 00:48:13.512117 | orchestrator | Saturday 03 January 2026 00:46:06 +0000 (0:00:00.849) 0:00:32.470 ****** 2026-01-03 00:48:13.512120 | orchestrator | [WARNING]: Skipped 2026-01-03 00:48:13.512123 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-01-03 00:48:13.512126 | orchestrator | to this access issue: 2026-01-03 00:48:13.512129 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-01-03 00:48:13.512132 | orchestrator | directory 2026-01-03 00:48:13.512135 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-03 00:48:13.512138 | orchestrator | 2026-01-03 00:48:13.512141 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-01-03 00:48:13.512144 | orchestrator | Saturday 03 January 2026 00:46:07 +0000 (0:00:00.673) 0:00:33.144 ****** 2026-01-03 00:48:13.512147 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:48:13.512153 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:48:13.512156 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:48:13.512159 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:48:13.512162 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:48:13.512165 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:48:13.512168 | orchestrator | changed: [testbed-manager] 2026-01-03 00:48:13.512171 | orchestrator | 2026-01-03 00:48:13.512174 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-01-03 00:48:13.512177 | orchestrator | Saturday 03 January 2026 00:46:12 +0000 (0:00:05.252) 0:00:38.396 ****** 2026-01-03 00:48:13.512180 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-03 00:48:13.512184 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-03 00:48:13.512187 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-03 00:48:13.512190 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-03 00:48:13.512193 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-03 
00:48:13.512196 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-03 00:48:13.512201 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-03 00:48:13.512204 | orchestrator | 2026-01-03 00:48:13.512207 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-01-03 00:48:13.512210 | orchestrator | Saturday 03 January 2026 00:46:14 +0000 (0:00:02.106) 0:00:40.503 ****** 2026-01-03 00:48:13.512213 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:48:13.512216 | orchestrator | changed: [testbed-manager] 2026-01-03 00:48:13.512219 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:48:13.512223 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:48:13.512225 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:48:13.512229 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:48:13.512232 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:48:13.512235 | orchestrator | 2026-01-03 00:48:13.512238 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-01-03 00:48:13.512241 | orchestrator | Saturday 03 January 2026 00:46:17 +0000 (0:00:02.665) 0:00:43.168 ****** 2026-01-03 00:48:13.512247 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:48:13.512251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.512254 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:48:13.512259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.512262 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:48:13.512266 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.512269 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:48:13.512276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.512279 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.512286 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.512292 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.512295 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2026-01-03 00:48:13.512298 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.512302 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.512308 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:48:13.512314 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.512317 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:48:13.512320 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.512326 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.512329 | orchestrator | 
ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.512332 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.512335 | orchestrator | 2026-01-03 00:48:13.512339 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-01-03 00:48:13.512342 | orchestrator | Saturday 03 January 2026 00:46:19 +0000 (0:00:02.374) 0:00:45.542 ****** 2026-01-03 00:48:13.512345 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-03 00:48:13.512348 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-03 00:48:13.512351 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-03 00:48:13.512354 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-03 00:48:13.512357 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-03 00:48:13.512360 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-03 00:48:13.512363 | orchestrator | changed: 
[testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-03 00:48:13.512366 | orchestrator | 2026-01-03 00:48:13.512371 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-01-03 00:48:13.512374 | orchestrator | Saturday 03 January 2026 00:46:21 +0000 (0:00:02.328) 0:00:47.871 ****** 2026-01-03 00:48:13.512378 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-03 00:48:13.512381 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-03 00:48:13.512384 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-03 00:48:13.512387 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-03 00:48:13.512390 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-03 00:48:13.512393 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-03 00:48:13.512396 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-03 00:48:13.512399 | orchestrator | 2026-01-03 00:48:13.512404 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-01-03 00:48:13.512410 | orchestrator | Saturday 03 January 2026 00:46:23 +0000 (0:00:02.190) 0:00:50.062 ****** 2026-01-03 00:48:13.512413 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:48:13.512416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:48:13.512420 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:48:13.512423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:48:13.512426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:48:13.512429 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:48:13.512435 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:48:13.512443 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.512448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.512452 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.512455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-01-03 00:48:13.512459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.512465 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.512469 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.512478 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.512482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.512486 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.512489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.512493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.512524 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.512532 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:48:13.512536 | orchestrator | 2026-01-03 00:48:13.512540 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-01-03 00:48:13.512544 | orchestrator | Saturday 03 January 2026 00:46:27 +0000 (0:00:03.379) 0:00:53.441 ****** 2026-01-03 00:48:13.512547 | orchestrator | changed: [testbed-manager] => { 2026-01-03 00:48:13.512551 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:48:13.512555 | orchestrator | } 2026-01-03 00:48:13.512558 | orchestrator | changed: [testbed-node-0] => { 2026-01-03 00:48:13.512562 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:48:13.512565 | orchestrator | } 2026-01-03 00:48:13.512569 | orchestrator | changed: [testbed-node-1] => { 2026-01-03 00:48:13.512576 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:48:13.512579 | 
orchestrator | } 2026-01-03 00:48:13.512585 | orchestrator | changed: [testbed-node-2] => { 2026-01-03 00:48:13.512589 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:48:13.512592 | orchestrator | } 2026-01-03 00:48:13.512596 | orchestrator | changed: [testbed-node-3] => { 2026-01-03 00:48:13.512599 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:48:13.512603 | orchestrator | } 2026-01-03 00:48:13.512607 | orchestrator | changed: [testbed-node-4] => { 2026-01-03 00:48:13.512610 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:48:13.512614 | orchestrator | } 2026-01-03 00:48:13.512617 | orchestrator | changed: [testbed-node-5] => { 2026-01-03 00:48:13.512621 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:48:13.512624 | orchestrator | } 2026-01-03 00:48:13.512628 | orchestrator | 2026-01-03 00:48:13.512631 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-03 00:48:13.512635 | orchestrator | Saturday 03 January 2026 00:46:28 +0000 (0:00:00.998) 0:00:54.440 ****** 2026-01-03 00:48:13.512644 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-03 00:48:13.512648 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.512652 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.512655 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:48:13.512659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-03 00:48:13.512663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-01-03 00:48:13.512667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.512673 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:48:13.512680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-03 00:48:13.512684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.512691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.512695 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:48:13.512698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-03 00:48:13.512702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.512706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.512710 | orchestrator | 
skipping: [testbed-node-2] 2026-01-03 00:48:13.512713 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-03 00:48:13.512723 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.512729 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.512732 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:48:13.512739 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-03 00:48:13.512743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.512746 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.512750 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:48:13.512754 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-03 00:48:13.512758 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.512765 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:48:13.512768 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:48:13.512772 | orchestrator | 2026-01-03 00:48:13.512775 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-01-03 00:48:13.512779 | orchestrator | Saturday 03 January 2026 00:46:29 +0000 (0:00:01.366) 0:00:55.807 ****** 2026-01-03 00:48:13.512783 | orchestrator | changed: [testbed-manager] 2026-01-03 00:48:13.512786 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:48:13.512790 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:48:13.512793 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:48:13.512797 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:48:13.512800 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:48:13.512804 | orchestrator | changed: 
[testbed-node-5] 2026-01-03 00:48:13.512808 | orchestrator | 2026-01-03 00:48:13.512813 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-01-03 00:48:13.512822 | orchestrator | Saturday 03 January 2026 00:46:31 +0000 (0:00:01.568) 0:00:57.376 ****** 2026-01-03 00:48:13.512827 | orchestrator | changed: [testbed-manager] 2026-01-03 00:48:13.512832 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:48:13.512837 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:48:13.512842 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:48:13.512847 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:48:13.512851 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:48:13.512857 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:48:13.512862 | orchestrator | 2026-01-03 00:48:13.512867 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-03 00:48:13.512872 | orchestrator | Saturday 03 January 2026 00:46:32 +0000 (0:00:01.039) 0:00:58.415 ****** 2026-01-03 00:48:13.512878 | orchestrator | 2026-01-03 00:48:13.512885 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-03 00:48:13.512890 | orchestrator | Saturday 03 January 2026 00:46:32 +0000 (0:00:00.068) 0:00:58.484 ****** 2026-01-03 00:48:13.512895 | orchestrator | 2026-01-03 00:48:13.512900 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-03 00:48:13.512906 | orchestrator | Saturday 03 January 2026 00:46:32 +0000 (0:00:00.068) 0:00:58.552 ****** 2026-01-03 00:48:13.512910 | orchestrator | 2026-01-03 00:48:13.512918 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-03 00:48:13.512923 | orchestrator | Saturday 03 January 2026 00:46:32 +0000 (0:00:00.217) 0:00:58.769 ****** 2026-01-03 00:48:13.512928 | orchestrator | 2026-01-03 
00:48:13.512932 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-03 00:48:13.512937 | orchestrator | Saturday 03 January 2026 00:46:32 +0000 (0:00:00.063) 0:00:58.833 ****** 2026-01-03 00:48:13.512942 | orchestrator | 2026-01-03 00:48:13.512949 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-03 00:48:13.512954 | orchestrator | Saturday 03 January 2026 00:46:32 +0000 (0:00:00.059) 0:00:58.892 ****** 2026-01-03 00:48:13.512958 | orchestrator | 2026-01-03 00:48:13.512963 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-03 00:48:13.512968 | orchestrator | Saturday 03 January 2026 00:46:32 +0000 (0:00:00.063) 0:00:58.956 ****** 2026-01-03 00:48:13.512973 | orchestrator | 2026-01-03 00:48:13.512978 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-01-03 00:48:13.512989 | orchestrator | Saturday 03 January 2026 00:46:32 +0000 (0:00:00.085) 0:00:59.041 ****** 2026-01-03 00:48:13.512995 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:48:13.513000 | orchestrator | changed: [testbed-manager] 2026-01-03 00:48:13.513006 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:48:13.513011 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:48:13.513016 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:48:13.513022 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:48:13.513026 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:48:13.513031 | orchestrator | 2026-01-03 00:48:13.513037 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-01-03 00:48:13.513041 | orchestrator | Saturday 03 January 2026 00:47:04 +0000 (0:00:31.710) 0:01:30.752 ****** 2026-01-03 00:48:13.513046 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:48:13.513051 | orchestrator | changed: 
[testbed-node-1] 2026-01-03 00:48:13.513055 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:48:13.513060 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:48:13.513065 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:48:13.513070 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:48:13.513075 | orchestrator | changed: [testbed-manager] 2026-01-03 00:48:13.513080 | orchestrator | 2026-01-03 00:48:13.513086 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-01-03 00:48:13.513093 | orchestrator | Saturday 03 January 2026 00:47:58 +0000 (0:00:53.875) 0:02:24.627 ****** 2026-01-03 00:48:13.513098 | orchestrator | ok: [testbed-manager] 2026-01-03 00:48:13.513104 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:48:13.513111 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:48:13.513116 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:48:13.513122 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:48:13.513128 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:48:13.513134 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:48:13.513140 | orchestrator | 2026-01-03 00:48:13.513146 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-01-03 00:48:13.513152 | orchestrator | Saturday 03 January 2026 00:48:00 +0000 (0:00:02.240) 0:02:26.868 ****** 2026-01-03 00:48:13.513158 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:48:13.513164 | orchestrator | changed: [testbed-manager] 2026-01-03 00:48:13.513169 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:48:13.513174 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:48:13.513179 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:48:13.513186 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:48:13.513275 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:48:13.513282 | orchestrator | 2026-01-03 00:48:13.513288 | orchestrator | PLAY RECAP 
********************************************************************* 2026-01-03 00:48:13.513295 | orchestrator | testbed-manager : ok=24  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-03 00:48:13.513305 | orchestrator | testbed-node-0 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-03 00:48:13.513310 | orchestrator | testbed-node-1 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-03 00:48:13.513316 | orchestrator | testbed-node-2 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-03 00:48:13.513320 | orchestrator | testbed-node-3 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-03 00:48:13.513325 | orchestrator | testbed-node-4 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-03 00:48:13.513331 | orchestrator | testbed-node-5 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-03 00:48:13.513344 | orchestrator | 2026-01-03 00:48:13.513349 | orchestrator | 2026-01-03 00:48:13.513354 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:48:13.513360 | orchestrator | Saturday 03 January 2026 00:48:11 +0000 (0:00:10.585) 0:02:37.453 ****** 2026-01-03 00:48:13.513365 | orchestrator | =============================================================================== 2026-01-03 00:48:13.513370 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 53.88s 2026-01-03 00:48:13.513376 | orchestrator | common : Restart fluentd container ------------------------------------- 31.71s 2026-01-03 00:48:13.513381 | orchestrator | common : Restart cron container ---------------------------------------- 10.59s 2026-01-03 00:48:13.513386 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.25s 2026-01-03 00:48:13.513398 | 
orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.24s 2026-01-03 00:48:13.513403 | orchestrator | common : Copying over config.json files for services -------------------- 4.23s 2026-01-03 00:48:13.513408 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.12s 2026-01-03 00:48:13.513413 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.94s 2026-01-03 00:48:13.513427 | orchestrator | service-check-containers : common | Check containers -------------------- 3.38s 2026-01-03 00:48:13.513437 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.93s 2026-01-03 00:48:13.513443 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.67s 2026-01-03 00:48:13.513454 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.54s 2026-01-03 00:48:13.513459 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.37s 2026-01-03 00:48:13.513465 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.33s 2026-01-03 00:48:13.513469 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.24s 2026-01-03 00:48:13.513474 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.19s 2026-01-03 00:48:13.513479 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.11s 2026-01-03 00:48:13.513485 | orchestrator | common : Creating log volume -------------------------------------------- 1.57s 2026-01-03 00:48:13.513491 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.49s 2026-01-03 00:48:13.513522 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.37s 2026-01-03 00:48:13.513531 | 
orchestrator | 2026-01-03 00:48:13 | INFO  | Task 823d1df8-44cd-473d-bfbf-08e3d050bfa9 is in state STARTED 2026-01-03 00:48:13.513538 | orchestrator | 2026-01-03 00:48:13 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:48:13.513543 | orchestrator | 2026-01-03 00:48:13 | INFO  | Task 3ede228b-fcef-4a88-bcb7-ff78678b8564 is in state STARTED 2026-01-03 00:48:13.513958 | orchestrator | 2026-01-03 00:48:13 | INFO  | Task 26712c7f-a3ea-4742-bc11-3e2cc9773b0b is in state STARTED 2026-01-03 00:48:13.514110 | orchestrator | 2026-01-03 00:48:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:48:16.539491 | orchestrator | 2026-01-03 00:48:16 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:48:16.540099 | orchestrator | 2026-01-03 00:48:16 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED 2026-01-03 00:48:16.544776 | orchestrator | 2026-01-03 00:48:16 | INFO  | Task 823d1df8-44cd-473d-bfbf-08e3d050bfa9 is in state STARTED 2026-01-03 00:48:16.545097 | orchestrator | 2026-01-03 00:48:16 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:48:16.545748 | orchestrator | 2026-01-03 00:48:16 | INFO  | Task 3ede228b-fcef-4a88-bcb7-ff78678b8564 is in state STARTED 2026-01-03 00:48:16.546371 | orchestrator | 2026-01-03 00:48:16 | INFO  | Task 26712c7f-a3ea-4742-bc11-3e2cc9773b0b is in state STARTED 2026-01-03 00:48:16.546394 | orchestrator | 2026-01-03 00:48:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:48:19.608570 | orchestrator | 2026-01-03 00:48:19 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:48:19.608632 | orchestrator | 2026-01-03 00:48:19 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED 2026-01-03 00:48:19.608640 | orchestrator | 2026-01-03 00:48:19 | INFO  | Task 823d1df8-44cd-473d-bfbf-08e3d050bfa9 is in state STARTED 2026-01-03 00:48:19.608646 | 
orchestrator | 2026-01-03 00:48:19 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:48:19.608652 | orchestrator | 2026-01-03 00:48:19 | INFO  | Task 3ede228b-fcef-4a88-bcb7-ff78678b8564 is in state STARTED 2026-01-03 00:48:19.608668 | orchestrator | 2026-01-03 00:48:19 | INFO  | Task 26712c7f-a3ea-4742-bc11-3e2cc9773b0b is in state STARTED 2026-01-03 00:48:19.608674 | orchestrator | 2026-01-03 00:48:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:48:22.605722 | orchestrator | 2026-01-03 00:48:22 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:48:22.605788 | orchestrator | 2026-01-03 00:48:22 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED 2026-01-03 00:48:22.605796 | orchestrator | 2026-01-03 00:48:22 | INFO  | Task 823d1df8-44cd-473d-bfbf-08e3d050bfa9 is in state STARTED 2026-01-03 00:48:22.606432 | orchestrator | 2026-01-03 00:48:22 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:48:22.606711 | orchestrator | 2026-01-03 00:48:22 | INFO  | Task 3ede228b-fcef-4a88-bcb7-ff78678b8564 is in state STARTED 2026-01-03 00:48:22.608601 | orchestrator | 2026-01-03 00:48:22 | INFO  | Task 26712c7f-a3ea-4742-bc11-3e2cc9773b0b is in state STARTED 2026-01-03 00:48:22.608650 | orchestrator | 2026-01-03 00:48:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:48:25.653748 | orchestrator | 2026-01-03 00:48:25 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:48:25.653820 | orchestrator | 2026-01-03 00:48:25 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED 2026-01-03 00:48:25.654371 | orchestrator | 2026-01-03 00:48:25 | INFO  | Task 823d1df8-44cd-473d-bfbf-08e3d050bfa9 is in state STARTED 2026-01-03 00:48:25.655169 | orchestrator | 2026-01-03 00:48:25 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:48:25.658629 | 
orchestrator | 2026-01-03 00:48:25 | INFO  | Task 3ede228b-fcef-4a88-bcb7-ff78678b8564 is in state STARTED 2026-01-03 00:48:25.659017 | orchestrator | 2026-01-03 00:48:25 | INFO  | Task 26712c7f-a3ea-4742-bc11-3e2cc9773b0b is in state STARTED 2026-01-03 00:48:25.659054 | orchestrator | 2026-01-03 00:48:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:48:28.683309 | orchestrator | 2026-01-03 00:48:28 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:48:28.683976 | orchestrator | 2026-01-03 00:48:28 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED 2026-01-03 00:48:28.684732 | orchestrator | 2026-01-03 00:48:28 | INFO  | Task 823d1df8-44cd-473d-bfbf-08e3d050bfa9 is in state STARTED 2026-01-03 00:48:28.686591 | orchestrator | 2026-01-03 00:48:28 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:48:28.687544 | orchestrator | 2026-01-03 00:48:28 | INFO  | Task 3ede228b-fcef-4a88-bcb7-ff78678b8564 is in state STARTED 2026-01-03 00:48:28.688247 | orchestrator | 2026-01-03 00:48:28 | INFO  | Task 26712c7f-a3ea-4742-bc11-3e2cc9773b0b is in state STARTED 2026-01-03 00:48:28.688389 | orchestrator | 2026-01-03 00:48:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:48:31.714898 | orchestrator | 2026-01-03 00:48:31 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:48:31.716788 | orchestrator | 2026-01-03 00:48:31 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED 2026-01-03 00:48:31.718571 | orchestrator | 2026-01-03 00:48:31 | INFO  | Task 823d1df8-44cd-473d-bfbf-08e3d050bfa9 is in state STARTED 2026-01-03 00:48:31.719921 | orchestrator | 2026-01-03 00:48:31 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:48:31.722080 | orchestrator | 2026-01-03 00:48:31 | INFO  | Task 3ede228b-fcef-4a88-bcb7-ff78678b8564 is in state STARTED 2026-01-03 00:48:31.724374 | 
orchestrator | 2026-01-03 00:48:31 | INFO  | Task 26712c7f-a3ea-4742-bc11-3e2cc9773b0b is in state STARTED 2026-01-03 00:48:31.724422 | orchestrator | 2026-01-03 00:48:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:48:34.758855 | orchestrator | 2026-01-03 00:48:34 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:48:34.759636 | orchestrator | 2026-01-03 00:48:34 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED 2026-01-03 00:48:34.761546 | orchestrator | 2026-01-03 00:48:34 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:48:34.762177 | orchestrator | 2026-01-03 00:48:34 | INFO  | Task 823d1df8-44cd-473d-bfbf-08e3d050bfa9 is in state STARTED 2026-01-03 00:48:34.764677 | orchestrator | 2026-01-03 00:48:34 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:48:34.765330 | orchestrator | 2026-01-03 00:48:34 | INFO  | Task 3ede228b-fcef-4a88-bcb7-ff78678b8564 is in state SUCCESS 2026-01-03 00:48:34.765565 | orchestrator | 2026-01-03 00:48:34.765582 | orchestrator | 2026-01-03 00:48:34.765586 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-03 00:48:34.765590 | orchestrator | 2026-01-03 00:48:34.765593 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-03 00:48:34.765597 | orchestrator | Saturday 03 January 2026 00:48:16 +0000 (0:00:00.283) 0:00:00.283 ****** 2026-01-03 00:48:34.765601 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:48:34.765604 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:48:34.765608 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:48:34.765611 | orchestrator | 2026-01-03 00:48:34.765614 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-03 00:48:34.765617 | orchestrator | Saturday 03 January 2026 00:48:16 +0000 (0:00:00.273) 
0:00:00.556 ****** 2026-01-03 00:48:34.765621 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-01-03 00:48:34.765625 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-01-03 00:48:34.765628 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-01-03 00:48:34.765631 | orchestrator | 2026-01-03 00:48:34.765634 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-01-03 00:48:34.765637 | orchestrator | 2026-01-03 00:48:34.765640 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-01-03 00:48:34.765643 | orchestrator | Saturday 03 January 2026 00:48:17 +0000 (0:00:00.351) 0:00:00.908 ****** 2026-01-03 00:48:34.765646 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:48:34.765650 | orchestrator | 2026-01-03 00:48:34.765653 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-01-03 00:48:34.765656 | orchestrator | Saturday 03 January 2026 00:48:17 +0000 (0:00:00.665) 0:00:01.573 ****** 2026-01-03 00:48:34.765671 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-01-03 00:48:34.765675 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-01-03 00:48:34.765678 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-01-03 00:48:34.765681 | orchestrator | 2026-01-03 00:48:34.765684 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-01-03 00:48:34.765687 | orchestrator | Saturday 03 January 2026 00:48:18 +0000 (0:00:00.744) 0:00:02.318 ****** 2026-01-03 00:48:34.765690 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-01-03 00:48:34.765693 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-01-03 00:48:34.765696 | orchestrator | changed: 
[testbed-node-2] => (item=memcached) 2026-01-03 00:48:34.765699 | orchestrator | 2026-01-03 00:48:34.765702 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-01-03 00:48:34.765705 | orchestrator | Saturday 03 January 2026 00:48:20 +0000 (0:00:01.594) 0:00:03.912 ****** 2026-01-03 00:48:34.765711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-03 00:48:34.765716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-03 00:48:34.765729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': 
{'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-03 00:48:34.765732 | orchestrator | 2026-01-03 00:48:34.765738 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 2026-01-03 00:48:34.765745 | orchestrator | Saturday 03 January 2026 00:48:21 +0000 (0:00:01.259) 0:00:05.171 ****** 2026-01-03 00:48:34.765750 | orchestrator | changed: [testbed-node-0] => { 2026-01-03 00:48:34.765756 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:48:34.765760 | orchestrator | } 2026-01-03 00:48:34.765765 | orchestrator | changed: [testbed-node-1] => { 2026-01-03 00:48:34.765771 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:48:34.765776 | orchestrator | } 2026-01-03 00:48:34.765781 | orchestrator | changed: [testbed-node-2] => { 2026-01-03 00:48:34.765786 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:48:34.765796 | orchestrator | } 2026-01-03 00:48:34.765801 | orchestrator | 2026-01-03 00:48:34.765806 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-03 00:48:34.765812 | orchestrator | Saturday 03 January 2026 00:48:21 +0000 (0:00:00.552) 0:00:05.724 ****** 2026-01-03 00:48:34.765818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-03 00:48:34.765823 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:48:34.765829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-03 00:48:34.765834 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:48:34.765840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-03 00:48:34.765845 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:48:34.765850 | orchestrator | 2026-01-03 00:48:34.765855 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-01-03 00:48:34.765860 | orchestrator | Saturday 03 January 2026 00:48:24 +0000 (0:00:02.516) 0:00:08.240 ****** 2026-01-03 00:48:34.765865 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:48:34.765870 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:48:34.765875 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:48:34.765880 | orchestrator | 2026-01-03 00:48:34.765885 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:48:34.765891 | orchestrator | testbed-node-0 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:48:34.765896 | orchestrator | testbed-node-1 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:48:34.765901 | orchestrator | testbed-node-2 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:48:34.765906 | orchestrator | 2026-01-03 00:48:34.765911 | orchestrator | 2026-01-03 00:48:34.765917 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:48:34.765926 | orchestrator | Saturday 03 January 2026 00:48:32 +0000 (0:00:08.378) 0:00:16.618 ****** 2026-01-03 00:48:34.765940 | orchestrator | =============================================================================== 2026-01-03 00:48:34.765945 | orchestrator | memcached : Restart memcached 
container --------------------------------- 8.38s 2026-01-03 00:48:34.765950 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.52s 2026-01-03 00:48:34.765955 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.59s 2026-01-03 00:48:34.765960 | orchestrator | service-check-containers : memcached | Check containers ----------------- 1.26s 2026-01-03 00:48:34.765965 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.74s 2026-01-03 00:48:34.765970 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.67s 2026-01-03 00:48:34.765975 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 0.55s 2026-01-03 00:48:34.765980 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.35s 2026-01-03 00:48:34.765985 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s 2026-01-03 00:48:34.766132 | orchestrator | 2026-01-03 00:48:34 | INFO  | Task 26712c7f-a3ea-4742-bc11-3e2cc9773b0b is in state STARTED 2026-01-03 00:48:34.766184 | orchestrator | 2026-01-03 00:48:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:48:37.792154 | orchestrator | 2026-01-03 00:48:37 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:48:37.792220 | orchestrator | 2026-01-03 00:48:37 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED 2026-01-03 00:48:37.792228 | orchestrator | 2026-01-03 00:48:37 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:48:37.792234 | orchestrator | 2026-01-03 00:48:37 | INFO  | Task 823d1df8-44cd-473d-bfbf-08e3d050bfa9 is in state STARTED 2026-01-03 00:48:37.793530 | orchestrator | 2026-01-03 00:48:37 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 
00:48:37.794235 | orchestrator | 2026-01-03 00:48:37 | INFO  | Task 26712c7f-a3ea-4742-bc11-3e2cc9773b0b is in state STARTED 2026-01-03 00:48:37.794713 | orchestrator | 2026-01-03 00:48:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:48:40.814047 | orchestrator | 2026-01-03 00:48:40 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:48:40.814364 | orchestrator | 2026-01-03 00:48:40 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED 2026-01-03 00:48:40.814974 | orchestrator | 2026-01-03 00:48:40 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:48:40.815689 | orchestrator | 2026-01-03 00:48:40 | INFO  | Task 823d1df8-44cd-473d-bfbf-08e3d050bfa9 is in state STARTED 2026-01-03 00:48:40.817723 | orchestrator | 2026-01-03 00:48:40 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:48:40.819354 | orchestrator | 2026-01-03 00:48:40 | INFO  | Task 26712c7f-a3ea-4742-bc11-3e2cc9773b0b is in state STARTED 2026-01-03 00:48:40.819387 | orchestrator | 2026-01-03 00:48:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:48:43.850463 | orchestrator | 2026-01-03 00:48:43 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:48:43.850711 | orchestrator | 2026-01-03 00:48:43 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED 2026-01-03 00:48:43.854011 | orchestrator | 2026-01-03 00:48:43 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:48:43.854588 | orchestrator | 2026-01-03 00:48:43 | INFO  | Task 823d1df8-44cd-473d-bfbf-08e3d050bfa9 is in state STARTED 2026-01-03 00:48:43.855323 | orchestrator | 2026-01-03 00:48:43 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:48:43.859312 | orchestrator | 2026-01-03 00:48:43 | INFO  | Task 26712c7f-a3ea-4742-bc11-3e2cc9773b0b is in state SUCCESS 2026-01-03 
00:48:43.859354 | orchestrator | 2026-01-03 00:48:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:48:43.860441 | orchestrator | 2026-01-03 00:48:43.860521 | orchestrator | 2026-01-03 00:48:43.860535 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-03 00:48:43.860546 | orchestrator | 2026-01-03 00:48:43.860556 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-03 00:48:43.860567 | orchestrator | Saturday 03 January 2026 00:48:16 +0000 (0:00:00.291) 0:00:00.291 ****** 2026-01-03 00:48:43.860576 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:48:43.860587 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:48:43.860596 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:48:43.860606 | orchestrator | 2026-01-03 00:48:43.860616 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-03 00:48:43.860626 | orchestrator | Saturday 03 January 2026 00:48:17 +0000 (0:00:00.383) 0:00:00.675 ****** 2026-01-03 00:48:43.860636 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-01-03 00:48:43.860646 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-01-03 00:48:43.860667 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-01-03 00:48:43.860677 | orchestrator | 2026-01-03 00:48:43.860687 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-01-03 00:48:43.860697 | orchestrator | 2026-01-03 00:48:43.860707 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-01-03 00:48:43.860716 | orchestrator | Saturday 03 January 2026 00:48:17 +0000 (0:00:00.404) 0:00:01.079 ****** 2026-01-03 00:48:43.860725 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:48:43.860736 | orchestrator 
| 2026-01-03 00:48:43.860746 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-01-03 00:48:43.860756 | orchestrator | Saturday 03 January 2026 00:48:18 +0000 (0:00:00.576) 0:00:01.656 ****** 2026-01-03 00:48:43.860769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-03 00:48:43.860799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-03 00:48:43.860810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 
2026-01-03 00:48:43.860836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-03 00:48:43.860859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-03 00:48:43.860875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-03 00:48:43.860886 | orchestrator | 2026-01-03 00:48:43.860897 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-01-03 00:48:43.860907 | orchestrator | Saturday 03 January 2026 00:48:19 +0000 (0:00:01.186) 0:00:02.842 ****** 2026-01-03 00:48:43.860919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-03 00:48:43.860943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-03 00:48:43.860955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-03 00:48:43.860972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-03 00:48:43.860983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-03 00:48:43.861000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 
'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-03 00:48:43.861010 | orchestrator | 2026-01-03 00:48:43.861024 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-01-03 00:48:43.861034 | orchestrator | Saturday 03 January 2026 00:48:21 +0000 (0:00:02.582) 0:00:05.424 ****** 2026-01-03 00:48:43.861045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-03 00:48:43.861055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-03 00:48:43.861066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 
'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-03 00:48:43.861082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-03 00:48:43.861092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-03 00:48:43.861108 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-03 00:48:43.861118 | orchestrator | 2026-01-03 00:48:43.861128 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-01-03 00:48:43.861137 | orchestrator | Saturday 03 January 2026 00:48:25 +0000 (0:00:03.574) 0:00:08.999 ****** 2026-01-03 00:48:43.861151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-03 00:48:43.861162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-03 00:48:43.861173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-03 00:48:43.861189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-03 00:48:43.861200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-03 00:48:43.861216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-03 00:48:43.861226 | orchestrator | 2026-01-03 00:48:43.861236 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-01-03 00:48:43.861247 | orchestrator | Saturday 03 January 2026 00:48:27 +0000 (0:00:01.822) 0:00:10.821 ****** 2026-01-03 00:48:43.861257 | orchestrator | changed: [testbed-node-0] => { 2026-01-03 00:48:43.861267 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:48:43.861277 | orchestrator | } 2026-01-03 00:48:43.861287 | orchestrator | changed: [testbed-node-1] => { 2026-01-03 00:48:43.861308 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:48:43.861318 | orchestrator | } 2026-01-03 00:48:43.861328 | orchestrator | changed: [testbed-node-2] => { 2026-01-03 00:48:43.861338 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:48:43.861352 | orchestrator | } 2026-01-03 00:48:43.861362 | orchestrator | 2026-01-03 00:48:43.861371 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-03 00:48:43.861380 | orchestrator | Saturday 03 January 2026 00:48:27 +0000 (0:00:00.561) 0:00:11.383 
****** 2026-01-03 00:48:43.861390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-01-03 00:48:43.861401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-01-03 00:48:43.861419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-01-03 00:48:43.861429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-01-03 00:48:43.861439 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:48:43.861449 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:48:43.861459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-01-03 00:48:43.861492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-03 00:48:43.861504 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:48:43.861515 | orchestrator |
2026-01-03 00:48:43.861525 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-01-03 00:48:43.861535 | orchestrator | Saturday 03 January 2026 00:48:28 +0000 (0:00:01.074) 0:00:12.458 ******
2026-01-03 00:48:43.861544 | orchestrator |
2026-01-03 00:48:43.861554 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-01-03 00:48:43.861564 | orchestrator | Saturday 03 January 2026 00:48:29 +0000 (0:00:00.075) 0:00:12.534 ******
2026-01-03 00:48:43.861573 | orchestrator |
2026-01-03 00:48:43.861583 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-01-03 00:48:43.861593 | orchestrator | Saturday 03 January 2026 00:48:29 +0000 (0:00:00.075) 0:00:12.609 ******
2026-01-03 00:48:43.861607 | orchestrator |
2026-01-03 00:48:43.861617 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-01-03 00:48:43.861627 | orchestrator | Saturday 03 January 2026 00:48:29 +0000 (0:00:00.129) 0:00:12.738 ******
2026-01-03 00:48:43.861636 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:48:43.861646 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:48:43.861656 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:48:43.861665 | orchestrator |
2026-01-03 00:48:43.861675 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-01-03 00:48:43.861685 | orchestrator | Saturday 03 January 2026 00:48:37 +0000 (0:00:08.120) 0:00:20.859 ******
2026-01-03 00:48:43.861696 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:48:43.861705 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:48:43.861715 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:48:43.861725 | orchestrator |
2026-01-03 00:48:43.861735 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:48:43.861746 | orchestrator | testbed-node-0 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-03 00:48:43.861762 | orchestrator | testbed-node-1 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-03 00:48:43.861773 | orchestrator | testbed-node-2 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-03 00:48:43.861783 | orchestrator |
2026-01-03 00:48:43.861792 | orchestrator |
2026-01-03 00:48:43.861802 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:48:43.861811 | orchestrator | Saturday 03 January 2026 00:48:41 +0000 (0:00:04.189) 0:00:25.049 ******
2026-01-03 00:48:43.861821 | orchestrator | ===============================================================================
2026-01-03 00:48:43.861831 | orchestrator | redis : Restart redis container ----------------------------------------- 8.12s
2026-01-03 00:48:43.861842 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 4.19s
2026-01-03 00:48:43.861852 | orchestrator | redis : Copying over redis config files --------------------------------- 3.57s
2026-01-03 00:48:43.861861 | orchestrator | redis : Copying over default config.json files -------------------------- 2.58s
2026-01-03 00:48:43.861871 | orchestrator | service-check-containers : redis | Check containers --------------------- 1.82s
2026-01-03 00:48:43.861881 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.19s
2026-01-03 00:48:43.861891 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.07s
2026-01-03 00:48:43.861901 | orchestrator | redis : include_tasks
--------------------------------------------------- 0.58s
2026-01-03 00:48:43.861910 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 0.56s
2026-01-03 00:48:43.861919 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.40s
2026-01-03 00:48:43.861929 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s
2026-01-03 00:48:43.861939 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.28s
2026-01-03 00:48:46.885626 | orchestrator | 2026-01-03 00:48:46 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED
2026-01-03 00:48:46.885743 | orchestrator | 2026-01-03 00:48:46 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED
2026-01-03 00:48:46.886560 | orchestrator | 2026-01-03 00:48:46 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED
2026-01-03 00:48:46.887038 | orchestrator | 2026-01-03 00:48:46 | INFO  | Task 823d1df8-44cd-473d-bfbf-08e3d050bfa9 is in state STARTED
2026-01-03 00:48:46.887662 | orchestrator | 2026-01-03 00:48:46 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED
2026-01-03 00:48:46.887705 | orchestrator | 2026-01-03 00:48:46 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:49:20.354398 | orchestrator | 2026-01-03 00:49:20 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED
2026-01-03 00:49:20.355898 | orchestrator | 2026-01-03 00:49:20 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED
2026-01-03 00:49:20.356802 | orchestrator | 2026-01-03 00:49:20 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED
2026-01-03 00:49:20.358078 | orchestrator | 2026-01-03 00:49:20 | INFO  | Task 823d1df8-44cd-473d-bfbf-08e3d050bfa9 is in state SUCCESS
2026-01-03 00:49:20.358108 | orchestrator |
2026-01-03 00:49:20.359983 | orchestrator
|
2026-01-03 00:49:20.360017 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-03 00:49:20.360025 | orchestrator |
2026-01-03 00:49:20.360033 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-03 00:49:20.360039 | orchestrator | Saturday 03 January 2026 00:48:17 +0000 (0:00:00.319) 0:00:00.319 ******
2026-01-03 00:49:20.360046 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:49:20.360054 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:49:20.360061 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:49:20.360067 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:49:20.360074 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:49:20.360080 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:49:20.360086 | orchestrator |
2026-01-03 00:49:20.360093 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-03 00:49:20.360100 | orchestrator | Saturday 03 January 2026 00:48:17 +0000 (0:00:00.689) 0:00:01.009 ******
2026-01-03 00:49:20.360108 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-03 00:49:20.360115 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-03 00:49:20.360122 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-03 00:49:20.360128 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-03 00:49:20.360135 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-03 00:49:20.360141 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-03 00:49:20.360147 | orchestrator |
2026-01-03 00:49:20.360154 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-01-03 00:49:20.360160 | orchestrator |
2026-01-03 00:49:20.360166 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-01-03 00:49:20.360173 | orchestrator | Saturday 03 January 2026 00:48:18 +0000 (0:00:00.737) 0:00:01.747 ******
2026-01-03 00:49:20.360185 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:49:20.360192 | orchestrator |
2026-01-03 00:49:20.360198 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-01-03 00:49:20.360204 | orchestrator | Saturday 03 January 2026 00:48:19 +0000 (0:00:01.243) 0:00:02.990 ******
2026-01-03 00:49:20.360211 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-01-03 00:49:20.360218 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-01-03 00:49:20.360224 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-01-03 00:49:20.360244 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-01-03 00:49:20.360251 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-01-03 00:49:20.360257 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-01-03 00:49:20.360263 | orchestrator |
2026-01-03 00:49:20.360269 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-01-03 00:49:20.360276 | orchestrator | Saturday 03 January 2026 00:48:21 +0000 (0:00:01.705) 0:00:04.329 ******
2026-01-03 00:49:20.360282 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-01-03 00:49:20.360289 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-01-03 00:49:20.360295 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-01-03 00:49:20.360302 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-01-03 00:49:20.360308 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-01-03 00:49:20.360315 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-01-03 00:49:20.360321 | orchestrator |
2026-01-03 00:49:20.360327 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-01-03 00:49:20.360334 | orchestrator | Saturday 03 January 2026 00:48:22 +0000 (0:00:01.705) 0:00:06.034 ******
2026-01-03 00:49:20.360340 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-01-03 00:49:20.360346 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:49:20.360352 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-01-03 00:49:20.360359 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-01-03 00:49:20.360365 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:49:20.360371 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:49:20.360377 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-01-03 00:49:20.360384 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-01-03 00:49:20.360390 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:49:20.360396 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:49:20.360403 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-01-03 00:49:20.360409 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:49:20.360445 | orchestrator |
2026-01-03 00:49:20.360451 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-01-03 00:49:20.360458 | orchestrator | Saturday 03 January 2026 00:48:24 +0000 (0:00:01.590) 0:00:07.625 ******
2026-01-03 00:49:20.360464 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:49:20.360470 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:49:20.360477 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:49:20.360483 | orchestrator | skipping:
[testbed-node-3]
2026-01-03 00:49:20.360491 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:49:20.360498 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:49:20.360504 | orchestrator |
2026-01-03 00:49:20.360510 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-01-03 00:49:20.360517 | orchestrator | Saturday 03 January 2026 00:48:25 +0000 (0:00:01.254) 0:00:08.880 ******
2026-01-03 00:49:20.360534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-03 00:49:20.360543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-03 00:49:20.360558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-03 00:49:20.360565 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-03 00:49:20.360572 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-03 00:49:20.360579 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-03 00:49:20.360590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-03 00:49:20.360601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-03 00:49:20.360611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-03 00:49:20.360618 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-03 00:49:20.360625 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-03 00:49:20.360636 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-03 00:49:20.360643 | orchestrator |
2026-01-03 00:49:20.360650 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-01-03 00:49:20.360656 | orchestrator | Saturday 03 January 2026 00:48:27 +0000 (0:00:01.437) 0:00:10.317 ******
2026-01-03 00:49:20.360663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-03 00:49:20.360677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-03 00:49:20.360685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-03 00:49:20.360692 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-03 00:49:20.360700 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-03 00:49:20.360711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-03 00:49:20.360722 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-03 00:49:20.360730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-03 00:49:20.360738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-03 00:49:20.360745 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-03 00:49:20.360752 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-03 00:49:20.360763 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image':
'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-03 00:49:20.360776 | orchestrator | 2026-01-03 00:49:20.360782 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-01-03 00:49:20.360789 | orchestrator | Saturday 03 January 2026 00:48:30 +0000 (0:00:03.323) 0:00:13.641 ****** 2026-01-03 00:49:20.360796 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:49:20.360804 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:49:20.360811 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:49:20.360818 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:49:20.360824 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:49:20.360831 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:49:20.360837 | orchestrator | 2026-01-03 00:49:20.360843 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] *************** 2026-01-03 00:49:20.360850 | orchestrator | Saturday 03 January 2026 00:48:31 +0000 (0:00:01.165) 0:00:14.806 ****** 2026-01-03 00:49:20.360859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-03 00:49:20.360867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-03 00:49:20.360880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-03 00:49:20.360887 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-03 00:49:20.360901 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-03 00:49:20.360908 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-03 00:49:20.360918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-03 00:49:20.360925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-03 00:49:20.360933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-03 00:49:20.360940 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-03 00:49:20.360957 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-03 00:49:20.360964 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': 
True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-03 00:49:20.360971 | orchestrator | 2026-01-03 00:49:20.360977 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] *** 2026-01-03 00:49:20.360984 | orchestrator | Saturday 03 January 2026 00:48:33 +0000 (0:00:02.238) 0:00:17.044 ****** 2026-01-03 00:49:20.360990 | orchestrator | changed: [testbed-node-1] => { 2026-01-03 00:49:20.361000 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:49:20.361007 | orchestrator | } 2026-01-03 00:49:20.361014 | orchestrator | changed: [testbed-node-0] => { 2026-01-03 00:49:20.361020 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:49:20.361026 | orchestrator | } 2026-01-03 00:49:20.361032 | orchestrator | changed: [testbed-node-2] => { 2026-01-03 00:49:20.361038 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:49:20.361045 | orchestrator | } 2026-01-03 00:49:20.361051 | orchestrator | changed: [testbed-node-3] => { 2026-01-03 00:49:20.361057 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:49:20.361063 | orchestrator | } 2026-01-03 00:49:20.361070 | orchestrator | changed: [testbed-node-4] => { 2026-01-03 00:49:20.361076 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:49:20.361082 | orchestrator | } 2026-01-03 00:49:20.361088 | orchestrator | changed: [testbed-node-5] => { 2026-01-03 00:49:20.361095 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:49:20.361101 | orchestrator | } 2026-01-03 00:49:20.361107 | orchestrator | 2026-01-03 00:49:20.361113 | orchestrator | TASK 
[service-check-containers : Include tasks] ******************************** 2026-01-03 00:49:20.361120 | orchestrator | Saturday 03 January 2026 00:48:35 +0000 (0:00:01.413) 0:00:18.458 ****** 2026-01-03 00:49:20.361126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-01-03 00:49:20.361137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-01-03 00:49:20.361143 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:49:20.361153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-01-03 00:49:20.361160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-01-03 00:49:20.361167 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:49:20.361174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-01-03 00:49:20.361181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-01-03 00:49:20.361188 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:49:20.361195 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-01-03 00:49:20.361540 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-01-03 00:49:20.361559 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:49:20.361572 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-01-03 00:49:20.361579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-01-03 00:49:20.361586 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:49:20.361592 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-01-03 00:49:20.361599 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-01-03 00:49:20.361610 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:49:20.361617 | orchestrator | 2026-01-03 00:49:20.361623 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-03 00:49:20.361630 | orchestrator | Saturday 03 January 2026 00:48:37 +0000 (0:00:02.304) 0:00:20.762 ****** 2026-01-03 00:49:20.361637 | orchestrator | 2026-01-03 00:49:20.361643 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-03 00:49:20.361649 | orchestrator | 
Saturday 03 January 2026 00:48:38 +0000 (0:00:00.458) 0:00:21.221 ****** 2026-01-03 00:49:20.361657 | orchestrator | 2026-01-03 00:49:20.361664 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-03 00:49:20.361670 | orchestrator | Saturday 03 January 2026 00:48:38 +0000 (0:00:00.228) 0:00:21.450 ****** 2026-01-03 00:49:20.361677 | orchestrator | 2026-01-03 00:49:20.361694 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-03 00:49:20.361701 | orchestrator | Saturday 03 January 2026 00:48:38 +0000 (0:00:00.119) 0:00:21.569 ****** 2026-01-03 00:49:20.361707 | orchestrator | 2026-01-03 00:49:20.361713 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-03 00:49:20.361720 | orchestrator | Saturday 03 January 2026 00:48:38 +0000 (0:00:00.314) 0:00:21.884 ****** 2026-01-03 00:49:20.361726 | orchestrator | 2026-01-03 00:49:20.361733 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-03 00:49:20.361739 | orchestrator | Saturday 03 January 2026 00:48:38 +0000 (0:00:00.109) 0:00:21.993 ****** 2026-01-03 00:49:20.361745 | orchestrator | 2026-01-03 00:49:20.361751 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-01-03 00:49:20.361758 | orchestrator | Saturday 03 January 2026 00:48:39 +0000 (0:00:00.109) 0:00:22.103 ****** 2026-01-03 00:49:20.361764 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:49:20.361771 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:49:20.361778 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:49:20.361784 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:49:20.361790 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:49:20.361796 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:49:20.361803 | orchestrator | 2026-01-03 00:49:20.361810 | 
orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-01-03 00:49:20.361819 | orchestrator | Saturday 03 January 2026 00:48:47 +0000 (0:00:08.853) 0:00:30.957 ****** 2026-01-03 00:49:20.361826 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:49:20.361832 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:49:20.361839 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:49:20.361845 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:49:20.361851 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:49:20.361857 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:49:20.361863 | orchestrator | 2026-01-03 00:49:20.361870 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-01-03 00:49:20.361876 | orchestrator | Saturday 03 January 2026 00:48:49 +0000 (0:00:01.380) 0:00:32.337 ****** 2026-01-03 00:49:20.361882 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:49:20.361889 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:49:20.361895 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:49:20.361902 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:49:20.361908 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:49:20.361915 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:49:20.361921 | orchestrator | 2026-01-03 00:49:20.361927 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-01-03 00:49:20.361934 | orchestrator | Saturday 03 January 2026 00:48:59 +0000 (0:00:09.811) 0:00:42.149 ****** 2026-01-03 00:49:20.361946 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-01-03 00:49:20.361953 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-01-03 00:49:20.361960 | orchestrator | changed: [testbed-node-3] => (item={'col': 
'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-01-03 00:49:20.361967 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-01-03 00:49:20.361973 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-01-03 00:49:20.361980 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-01-03 00:49:20.361986 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-01-03 00:49:20.361992 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-01-03 00:49:20.361999 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-01-03 00:49:20.362005 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-01-03 00:49:20.362011 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-01-03 00:49:20.362069 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-01-03 00:49:20.362076 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-03 00:49:20.362084 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-03 00:49:20.362091 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-03 00:49:20.362097 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 
'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-03 00:49:20.362104 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-03 00:49:20.362111 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-03 00:49:20.362119 | orchestrator | 2026-01-03 00:49:20.362126 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-01-03 00:49:20.362133 | orchestrator | Saturday 03 January 2026 00:49:07 +0000 (0:00:08.224) 0:00:50.373 ****** 2026-01-03 00:49:20.362143 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-01-03 00:49:20.362151 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:49:20.362159 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-01-03 00:49:20.362166 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:49:20.362173 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-01-03 00:49:20.362180 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:49:20.362187 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-01-03 00:49:20.362194 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-01-03 00:49:20.362201 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-01-03 00:49:20.362208 | orchestrator | 2026-01-03 00:49:20.362215 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-01-03 00:49:20.362222 | orchestrator | Saturday 03 January 2026 00:49:09 +0000 (0:00:01.886) 0:00:52.260 ****** 2026-01-03 00:49:20.362229 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-01-03 00:49:20.362235 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:49:20.362247 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-01-03 00:49:20.362254 | orchestrator | skipping: 
[testbed-node-4] 2026-01-03 00:49:20.362261 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-01-03 00:49:20.362268 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:49:20.362275 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-01-03 00:49:20.362287 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-01-03 00:49:20.362294 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-01-03 00:49:20.362302 | orchestrator | 2026-01-03 00:49:20.362309 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-01-03 00:49:20.362315 | orchestrator | Saturday 03 January 2026 00:49:12 +0000 (0:00:03.273) 0:00:55.534 ****** 2026-01-03 00:49:20.362323 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:49:20.362331 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:49:20.362339 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:49:20.362346 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:49:20.362352 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:49:20.362359 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:49:20.362366 | orchestrator | 2026-01-03 00:49:20.362373 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:49:20.362381 | orchestrator | testbed-node-0 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-03 00:49:20.362388 | orchestrator | testbed-node-1 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-03 00:49:20.362395 | orchestrator | testbed-node-2 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-03 00:49:20.362402 | orchestrator | testbed-node-3 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-03 00:49:20.362409 | orchestrator | testbed-node-4 : ok=14  changed=10  unreachable=0 
failed=0 skipped=6  rescued=0 ignored=0 2026-01-03 00:49:20.362440 | orchestrator | testbed-node-5 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-03 00:49:20.362447 | orchestrator | 2026-01-03 00:49:20.362453 | orchestrator | 2026-01-03 00:49:20.362460 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:49:20.362466 | orchestrator | Saturday 03 January 2026 00:49:19 +0000 (0:00:07.449) 0:01:02.983 ****** 2026-01-03 00:49:20.362472 | orchestrator | =============================================================================== 2026-01-03 00:49:20.362478 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.26s 2026-01-03 00:49:20.362485 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 8.85s 2026-01-03 00:49:20.362491 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.22s 2026-01-03 00:49:20.362498 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.32s 2026-01-03 00:49:20.362506 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.27s 2026-01-03 00:49:20.362513 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.30s 2026-01-03 00:49:20.362519 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 2.24s 2026-01-03 00:49:20.362526 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 1.89s 2026-01-03 00:49:20.362532 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.71s 2026-01-03 00:49:20.362539 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.59s 2026-01-03 00:49:20.362545 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.44s 
2026-01-03 00:49:20.362556 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 1.41s 2026-01-03 00:49:20.362562 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.38s 2026-01-03 00:49:20.362568 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.34s 2026-01-03 00:49:20.362575 | orchestrator | module-load : Load modules ---------------------------------------------- 1.34s 2026-01-03 00:49:20.362581 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.25s 2026-01-03 00:49:20.362591 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.24s 2026-01-03 00:49:20.362597 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.17s 2026-01-03 00:49:20.362603 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.74s 2026-01-03 00:49:20.362610 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.69s 2026-01-03 00:49:20.362616 | orchestrator | 2026-01-03 00:49:20 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:49:20.362623 | orchestrator | 2026-01-03 00:49:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:49:23.384753 | orchestrator | 2026-01-03 00:49:23 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:49:23.387971 | orchestrator | 2026-01-03 00:49:23 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED 2026-01-03 00:49:23.388535 | orchestrator | 2026-01-03 00:49:23 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:49:23.389110 | orchestrator | 2026-01-03 00:49:23 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:49:23.389975 | orchestrator | 2026-01-03 00:49:23 | INFO  | Task 
00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:49:23.390010 | orchestrator | 2026-01-03 00:49:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:49:26.412798 | orchestrator | 2026-01-03 00:49:26 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:49:26.413138 | orchestrator | 2026-01-03 00:49:26 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED 2026-01-03 00:49:26.413889 | orchestrator | 2026-01-03 00:49:26 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:49:26.414634 | orchestrator | 2026-01-03 00:49:26 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:49:26.415278 | orchestrator | 2026-01-03 00:49:26 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:49:26.415296 | orchestrator | 2026-01-03 00:49:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:49:29.445455 | orchestrator | 2026-01-03 00:49:29 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:49:29.445570 | orchestrator | 2026-01-03 00:49:29 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED 2026-01-03 00:49:29.447551 | orchestrator | 2026-01-03 00:49:29 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:49:29.449729 | orchestrator | 2026-01-03 00:49:29 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:49:29.450114 | orchestrator | 2026-01-03 00:49:29 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:49:29.450151 | orchestrator | 2026-01-03 00:49:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:49:32.491801 | orchestrator | 2026-01-03 00:49:32 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:49:32.493557 | orchestrator | 2026-01-03 00:49:32 | INFO  | Task 
d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED 2026-01-03 00:49:32.494949 | orchestrator | 2026-01-03 00:49:32 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:49:32.503590 | orchestrator | 2026-01-03 00:49:32 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:49:32.504373 | orchestrator | 2026-01-03 00:49:32 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:49:32.504835 | orchestrator | 2026-01-03 00:49:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:49:35.544680 | orchestrator | 2026-01-03 00:49:35 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:49:35.597845 | orchestrator | 2026-01-03 00:49:35 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED 2026-01-03 00:49:35.597894 | orchestrator | 2026-01-03 00:49:35 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:49:35.597900 | orchestrator | 2026-01-03 00:49:35 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:49:35.597905 | orchestrator | 2026-01-03 00:49:35 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:49:35.597910 | orchestrator | 2026-01-03 00:49:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:49:38.592231 | orchestrator | 2026-01-03 00:49:38 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:49:38.592450 | orchestrator | 2026-01-03 00:49:38 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED 2026-01-03 00:49:38.593648 | orchestrator | 2026-01-03 00:49:38 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:49:38.594911 | orchestrator | 2026-01-03 00:49:38 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:49:38.595294 | orchestrator | 2026-01-03 00:49:38 | INFO  | Task 
00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:49:38.595311 | orchestrator | 2026-01-03 00:49:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:49:41.631540 | orchestrator | 2026-01-03 00:49:41 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:49:41.631850 | orchestrator | 2026-01-03 00:49:41 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED 2026-01-03 00:49:41.632723 | orchestrator | 2026-01-03 00:49:41 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:49:41.633321 | orchestrator | 2026-01-03 00:49:41 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:49:41.635046 | orchestrator | 2026-01-03 00:49:41 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:49:41.635069 | orchestrator | 2026-01-03 00:49:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:49:44.714239 | orchestrator | 2026-01-03 00:49:44 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:49:44.718594 | orchestrator | 2026-01-03 00:49:44 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED 2026-01-03 00:49:44.718677 | orchestrator | 2026-01-03 00:49:44 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:49:44.720622 | orchestrator | 2026-01-03 00:49:44 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:49:44.720891 | orchestrator | 2026-01-03 00:49:44 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:49:44.720909 | orchestrator | 2026-01-03 00:49:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:49:47.838365 | orchestrator | 2026-01-03 00:49:47 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:49:47.839927 | orchestrator | 2026-01-03 00:49:47 | INFO  | Task 
d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED 2026-01-03 00:49:47.840460 | orchestrator | 2026-01-03 00:49:47 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:49:47.843306 | orchestrator | 2026-01-03 00:49:47 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:49:47.843776 | orchestrator | 2026-01-03 00:49:47 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:49:47.843808 | orchestrator | 2026-01-03 00:49:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:49:50.884178 | orchestrator | 2026-01-03 00:49:50 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:49:50.884224 | orchestrator | 2026-01-03 00:49:50 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state STARTED 2026-01-03 00:49:50.884229 | orchestrator | 2026-01-03 00:49:50 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:49:50.884232 | orchestrator | 2026-01-03 00:49:50 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:49:50.884235 | orchestrator | 2026-01-03 00:49:50 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:49:50.884239 | orchestrator | 2026-01-03 00:49:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:49:54.059828 | orchestrator | 2026-01-03 00:49:54 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:49:54.062267 | orchestrator | 2026-01-03 00:49:54 | INFO  | Task d1b93c4e-8620-4a75-a5ad-c955918176c8 is in state SUCCESS 2026-01-03 00:49:54.063568 | orchestrator | 2026-01-03 00:49:54.063631 | orchestrator | 2026-01-03 00:49:54.063639 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-01-03 00:49:54.063646 | orchestrator | 2026-01-03 00:49:54.063652 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 
'main' - Prerequisites] *** 2026-01-03 00:49:54.063658 | orchestrator | Saturday 03 January 2026 00:45:34 +0000 (0:00:00.193) 0:00:00.193 ****** 2026-01-03 00:49:54.063664 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:49:54.063671 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:49:54.063676 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:49:54.063681 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:49:54.063687 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:49:54.063692 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:49:54.063841 | orchestrator | 2026-01-03 00:49:54.063856 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-01-03 00:49:54.063862 | orchestrator | Saturday 03 January 2026 00:45:35 +0000 (0:00:00.632) 0:00:00.826 ****** 2026-01-03 00:49:54.063868 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:49:54.063875 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:49:54.063881 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:49:54.063887 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:49:54.063892 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:49:54.063897 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:49:54.063903 | orchestrator | 2026-01-03 00:49:54.063908 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-01-03 00:49:54.063913 | orchestrator | Saturday 03 January 2026 00:45:35 +0000 (0:00:00.512) 0:00:01.338 ****** 2026-01-03 00:49:54.063919 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:49:54.063924 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:49:54.063929 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:49:54.063934 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:49:54.063939 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:49:54.063961 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:49:54.063966 | 
orchestrator | 2026-01-03 00:49:54.063971 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-01-03 00:49:54.063977 | orchestrator | Saturday 03 January 2026 00:45:36 +0000 (0:00:00.553) 0:00:01.892 ****** 2026-01-03 00:49:54.063982 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:49:54.063987 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:49:54.063992 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:49:54.063997 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:49:54.064002 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:49:54.064015 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:49:54.064026 | orchestrator | 2026-01-03 00:49:54.064031 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-01-03 00:49:54.064036 | orchestrator | Saturday 03 January 2026 00:45:37 +0000 (0:00:01.709) 0:00:03.601 ****** 2026-01-03 00:49:54.064042 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:49:54.064047 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:49:54.064052 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:49:54.064057 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:49:54.064062 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:49:54.064067 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:49:54.064072 | orchestrator | 2026-01-03 00:49:54.064077 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-01-03 00:49:54.064082 | orchestrator | Saturday 03 January 2026 00:45:39 +0000 (0:00:01.586) 0:00:05.188 ****** 2026-01-03 00:49:54.064087 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:49:54.064093 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:49:54.064098 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:49:54.064103 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:49:54.064108 | orchestrator | 
changed: [testbed-node-2] 2026-01-03 00:49:54.064113 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:49:54.064118 | orchestrator | 2026-01-03 00:49:54.064123 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-01-03 00:49:54.064128 | orchestrator | Saturday 03 January 2026 00:45:41 +0000 (0:00:01.712) 0:00:06.900 ****** 2026-01-03 00:49:54.064133 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:49:54.064138 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:49:54.064143 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:49:54.064148 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:49:54.064153 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:49:54.064159 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:49:54.064164 | orchestrator | 2026-01-03 00:49:54.064169 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-01-03 00:49:54.064174 | orchestrator | Saturday 03 January 2026 00:45:41 +0000 (0:00:00.754) 0:00:07.655 ****** 2026-01-03 00:49:54.064179 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:49:54.064184 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:49:54.064189 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:49:54.064194 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:49:54.064199 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:49:54.064204 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:49:54.064209 | orchestrator | 2026-01-03 00:49:54.064214 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-01-03 00:49:54.064220 | orchestrator | Saturday 03 January 2026 00:45:42 +0000 (0:00:00.660) 0:00:08.315 ****** 2026-01-03 00:49:54.064225 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-03 00:49:54.064230 | orchestrator | skipping: [testbed-node-3] => 
(item=net.bridge.bridge-nf-call-ip6tables)  2026-01-03 00:49:54.064235 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:49:54.064240 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-03 00:49:54.064245 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-03 00:49:54.064255 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:49:54.064260 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-03 00:49:54.064265 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-03 00:49:54.064271 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:49:54.064276 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-03 00:49:54.064292 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-03 00:49:54.064297 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:49:54.064302 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-03 00:49:54.064308 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-03 00:49:54.064313 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:49:54.064318 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-03 00:49:54.064323 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-03 00:49:54.064328 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:49:54.064333 | orchestrator | 2026-01-03 00:49:54.064355 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-01-03 00:49:54.064360 | orchestrator | Saturday 03 January 2026 00:45:43 +0000 (0:00:00.623) 0:00:08.939 ****** 2026-01-03 00:49:54.064366 | orchestrator | skipping: 
[testbed-node-3] 2026-01-03 00:49:54.064371 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:49:54.064376 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:49:54.064380 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:49:54.064383 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:49:54.064386 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:49:54.064389 | orchestrator | 2026-01-03 00:49:54.064393 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-01-03 00:49:54.064397 | orchestrator | Saturday 03 January 2026 00:45:44 +0000 (0:00:01.661) 0:00:10.600 ****** 2026-01-03 00:49:54.064400 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:49:54.064404 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:49:54.064407 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:49:54.064410 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:49:54.064414 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:49:54.064417 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:49:54.064420 | orchestrator | 2026-01-03 00:49:54.064423 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-01-03 00:49:54.064427 | orchestrator | Saturday 03 January 2026 00:45:45 +0000 (0:00:00.819) 0:00:11.420 ****** 2026-01-03 00:49:54.064430 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:49:54.064434 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:49:54.064437 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:49:54.064440 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:49:54.064444 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:49:54.064447 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:49:54.064450 | orchestrator | 2026-01-03 00:49:54.064454 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-01-03 00:49:54.064457 | orchestrator | 
Saturday 03 January 2026 00:45:50 +0000 (0:00:05.222) 0:00:16.643 ****** 2026-01-03 00:49:54.064460 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:49:54.064464 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:49:54.064468 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:49:54.064472 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:49:54.064476 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:49:54.064480 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:49:54.064484 | orchestrator | 2026-01-03 00:49:54.064601 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-01-03 00:49:54.064612 | orchestrator | Saturday 03 January 2026 00:45:52 +0000 (0:00:01.147) 0:00:17.791 ****** 2026-01-03 00:49:54.064621 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:49:54.064625 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:49:54.064629 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:49:54.064633 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:49:54.064637 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:49:54.064641 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:49:54.064646 | orchestrator | 2026-01-03 00:49:54.064650 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-01-03 00:49:54.064657 | orchestrator | Saturday 03 January 2026 00:45:54 +0000 (0:00:02.191) 0:00:19.982 ****** 2026-01-03 00:49:54.064662 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:49:54.064666 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:49:54.064670 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:49:54.064674 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:49:54.064678 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:49:54.064682 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:49:54.064686 
| orchestrator | 2026-01-03 00:49:54.064691 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-01-03 00:49:54.064695 | orchestrator | Saturday 03 January 2026 00:45:55 +0000 (0:00:01.547) 0:00:21.529 ****** 2026-01-03 00:49:54.064699 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-01-03 00:49:54.064704 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-01-03 00:49:54.064710 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:49:54.064758 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-01-03 00:49:54.064764 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-01-03 00:49:54.064769 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:49:54.064775 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-01-03 00:49:54.064780 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-01-03 00:49:54.064785 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:49:54.064791 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-01-03 00:49:54.064796 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-01-03 00:49:54.064802 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:49:54.064807 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-01-03 00:49:54.064814 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-01-03 00:49:54.064817 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:49:54.064821 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-01-03 00:49:54.064824 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-01-03 00:49:54.064827 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:49:54.064832 | orchestrator | 2026-01-03 00:49:54.064838 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-01-03 00:49:54.064855 | 
orchestrator | Saturday 03 January 2026 00:45:57 +0000 (0:00:01.771) 0:00:23.301 ****** 2026-01-03 00:49:54.064860 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:49:54.064864 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:49:54.064870 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:49:54.064875 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:49:54.064880 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:49:54.064885 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:49:54.064890 | orchestrator | 2026-01-03 00:49:54.064896 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-01-03 00:49:54.064901 | orchestrator | Saturday 03 January 2026 00:45:58 +0000 (0:00:00.844) 0:00:24.146 ****** 2026-01-03 00:49:54.064907 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:49:54.064919 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:49:54.064924 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:49:54.064930 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:49:54.065037 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:49:54.065055 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:49:54.065060 | orchestrator | 2026-01-03 00:49:54.065065 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-01-03 00:49:54.065070 | orchestrator | 2026-01-03 00:49:54.065075 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-01-03 00:49:54.065080 | orchestrator | Saturday 03 January 2026 00:45:59 +0000 (0:00:01.244) 0:00:25.390 ****** 2026-01-03 00:49:54.065085 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:49:54.065093 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:49:54.065098 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:49:54.065103 | orchestrator | 2026-01-03 00:49:54.065108 | orchestrator | 
TASK [k3s_server : Stop k3s-init] ********************************************** 2026-01-03 00:49:54.065114 | orchestrator | Saturday 03 January 2026 00:46:01 +0000 (0:00:01.428) 0:00:26.818 ****** 2026-01-03 00:49:54.065119 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:49:54.065125 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:49:54.065130 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:49:54.065135 | orchestrator | 2026-01-03 00:49:54.065140 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-01-03 00:49:54.065145 | orchestrator | Saturday 03 January 2026 00:46:02 +0000 (0:00:01.140) 0:00:27.959 ****** 2026-01-03 00:49:54.065150 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:49:54.065155 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:49:54.065160 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:49:54.065166 | orchestrator | 2026-01-03 00:49:54.065171 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-01-03 00:49:54.065176 | orchestrator | Saturday 03 January 2026 00:46:03 +0000 (0:00:00.994) 0:00:28.953 ****** 2026-01-03 00:49:54.065181 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:49:54.065186 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:49:54.065192 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:49:54.065197 | orchestrator | 2026-01-03 00:49:54.065202 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-01-03 00:49:54.065208 | orchestrator | Saturday 03 January 2026 00:46:04 +0000 (0:00:00.858) 0:00:29.812 ****** 2026-01-03 00:49:54.065213 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:49:54.065218 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:49:54.065223 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:49:54.065228 | orchestrator | 2026-01-03 00:49:54.065233 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] 
**************************
2026-01-03 00:49:54.065239 | orchestrator | Saturday 03 January 2026 00:46:04 +0000 (0:00:00.414) 0:00:30.226 ******
2026-01-03 00:49:54.065244 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:49:54.065249 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:49:54.065254 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:49:54.065259 | orchestrator |
2026-01-03 00:49:54.065264 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-01-03 00:49:54.065270 | orchestrator | Saturday 03 January 2026 00:46:05 +0000 (0:00:01.310) 0:00:31.537 ******
2026-01-03 00:49:54.065275 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:49:54.065280 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:49:54.065285 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:49:54.065290 | orchestrator |
2026-01-03 00:49:54.065295 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-01-03 00:49:54.065301 | orchestrator | Saturday 03 January 2026 00:46:07 +0000 (0:00:01.306) 0:00:32.843 ******
2026-01-03 00:49:54.065306 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:49:54.065311 | orchestrator |
2026-01-03 00:49:54.065317 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-01-03 00:49:54.065323 | orchestrator | Saturday 03 January 2026 00:46:07 +0000 (0:00:00.499) 0:00:33.343 ******
2026-01-03 00:49:54.065329 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:49:54.065334 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:49:54.065340 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:49:54.065386 | orchestrator |
2026-01-03 00:49:54.065392 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-01-03 00:49:54.065397 | orchestrator | Saturday 03 January 2026 00:46:10 +0000 (0:00:02.768) 0:00:36.112 ******
2026-01-03 00:49:54.065403 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:49:54.065408 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:49:54.065414 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:49:54.065420 | orchestrator |
2026-01-03 00:49:54.065425 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-01-03 00:49:54.065431 | orchestrator | Saturday 03 January 2026 00:46:10 +0000 (0:00:00.550) 0:00:36.663 ******
2026-01-03 00:49:54.065436 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:49:54.065441 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:49:54.065447 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:49:54.065452 | orchestrator |
2026-01-03 00:49:54.065458 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-01-03 00:49:54.065463 | orchestrator | Saturday 03 January 2026 00:46:11 +0000 (0:00:00.997) 0:00:37.660 ******
2026-01-03 00:49:54.065468 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:49:54.065474 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:49:54.065478 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:49:54.065484 | orchestrator |
2026-01-03 00:49:54.065489 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-01-03 00:49:54.065636 | orchestrator | Saturday 03 January 2026 00:46:13 +0000 (0:00:01.282) 0:00:38.942 ******
2026-01-03 00:49:54.065648 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:49:54.065776 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:49:54.065784 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:49:54.065790 | orchestrator |
2026-01-03 00:49:54.065796 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-01-03 00:49:54.065802 | orchestrator | Saturday 03 January 2026 00:46:13 +0000 (0:00:00.426) 0:00:39.369 ******
2026-01-03 00:49:54.065807 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:49:54.065813 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:49:54.065818 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:49:54.065824 | orchestrator |
2026-01-03 00:49:54.065835 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-01-03 00:49:54.065841 | orchestrator | Saturday 03 January 2026 00:46:13 +0000 (0:00:00.277) 0:00:39.646 ******
2026-01-03 00:49:54.065846 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:49:54.065852 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:49:54.065857 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:49:54.065862 | orchestrator |
2026-01-03 00:49:54.065867 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-01-03 00:49:54.065873 | orchestrator | Saturday 03 January 2026 00:46:15 +0000 (0:00:01.368) 0:00:41.014 ******
2026-01-03 00:49:54.065878 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:49:54.065884 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:49:54.065890 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:49:54.065895 | orchestrator |
2026-01-03 00:49:54.065900 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-01-03 00:49:54.065906 | orchestrator | Saturday 03 January 2026 00:46:17 +0000 (0:00:02.708) 0:00:43.723 ******
2026-01-03 00:49:54.065911 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:49:54.065916 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:49:54.065922 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:49:54.065928 | orchestrator |
2026-01-03 00:49:54.065933 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-01-03 00:49:54.065940 | orchestrator | Saturday 03 January 2026 00:46:18 +0000 (0:00:00.777) 0:00:44.501 ******
2026-01-03 00:49:54.065946 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-03 00:49:54.065953 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-03 00:49:54.065967 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-03 00:49:54.065973 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-03 00:49:54.065979 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-03 00:49:54.065985 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-03 00:49:54.065990 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-03 00:49:54.065996 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-03 00:49:54.066001 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-03 00:49:54.066007 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-03 00:49:54.066012 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
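The retry loop above eventually succeeds once every control-plane node appears in the cluster. The membership check it performs can be sketched as follows; this is a hypothetical helper (not the role's actual code), and the sample `kubectl get nodes` output, including the version string, is invented for illustration.

```python
# Sketch of the check behind "Verify that all nodes actually joined":
# the task retries until every expected server shows up in `kubectl get nodes`.

def joined_nodes(kubectl_output: str) -> set[str]:
    """Extract node names from `kubectl get nodes --no-headers`-style output."""
    return {line.split()[0] for line in kubectl_output.splitlines() if line.strip()}

def all_joined(kubectl_output: str, expected: set[str]) -> bool:
    """True once every expected node name is listed."""
    return expected.issubset(joined_nodes(kubectl_output))

# Invented sample output for the three servers seen in this run.
sample = """\
testbed-node-0   Ready    control-plane,etcd,master   54s   v1.31.4+k3s1
testbed-node-1   Ready    control-plane,etcd,master   40s   v1.31.4+k3s1
testbed-node-2   Ready    control-plane,etcd,master   38s   v1.31.4+k3s1
"""
```

While the servers are still converging the output lists fewer nodes, the check fails, and Ansible emits the `FAILED - RETRYING` lines seen above until all three are present.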
2026-01-03 00:49:54.066069 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-03 00:49:54.066076 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-01-03 00:49:54.066081 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-01-03 00:49:54.066086 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-01-03 00:49:54.066092 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:49:54.066098 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:49:54.066104 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:49:54.066110 | orchestrator |
2026-01-03 00:49:54.066116 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-01-03 00:49:54.066122 | orchestrator | Saturday 03 January 2026 00:47:12 +0000 (0:00:53.949) 0:01:38.451 ******
2026-01-03 00:49:54.066128 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:49:54.066133 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:49:54.066138 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:49:54.066143 | orchestrator |
2026-01-03 00:49:54.066149 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-01-03 00:49:54.066161 | orchestrator | Saturday 03 January 2026 00:47:13 +0000 (0:00:00.384) 0:01:38.835 ******
2026-01-03 00:49:54.066166 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:49:54.066171 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:49:54.066176 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:49:54.066182 | orchestrator |
2026-01-03 00:49:54.066260 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-01-03 00:49:54.066266 | orchestrator | Saturday 03 January 2026 00:47:14 +0000 (0:00:01.015) 0:01:39.851 ******
2026-01-03 00:49:54.066271 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:49:54.066276 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:49:54.066281 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:49:54.066286 | orchestrator |
2026-01-03 00:49:54.066292 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-01-03 00:49:54.066302 | orchestrator | Saturday 03 January 2026 00:47:15 +0000 (0:00:01.534) 0:01:41.386 ******
2026-01-03 00:49:54.066312 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:49:54.066318 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:49:54.066323 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:49:54.066328 | orchestrator |
2026-01-03 00:49:54.066334 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-01-03 00:49:54.066603 | orchestrator | Saturday 03 January 2026 00:47:41 +0000 (0:00:25.980) 0:02:07.367 ******
2026-01-03 00:49:54.066636 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:49:54.066643 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:49:54.066649 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:49:54.066654 | orchestrator |
2026-01-03 00:49:54.066659 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-01-03 00:49:54.066665 | orchestrator | Saturday 03 January 2026 00:47:42 +0000 (0:00:00.695) 0:02:08.062 ******
2026-01-03 00:49:54.066671 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:49:54.066678 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:49:54.066683 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:49:54.066688 | orchestrator |
2026-01-03 00:49:54.066693 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-01-03 00:49:54.066698 | orchestrator | Saturday 03 January 2026 00:47:42 +0000 (0:00:00.665) 0:02:08.727 ******
2026-01-03 00:49:54.066703 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:49:54.066709 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:49:54.066714 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:49:54.066719 | orchestrator |
2026-01-03 00:49:54.066724 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-01-03 00:49:54.066730 | orchestrator | Saturday 03 January 2026 00:47:43 +0000 (0:00:00.683) 0:02:09.411 ******
2026-01-03 00:49:54.066736 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:49:54.066742 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:49:54.066747 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:49:54.066753 | orchestrator |
2026-01-03 00:49:54.066758 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-01-03 00:49:54.066764 | orchestrator | Saturday 03 January 2026 00:47:44 +0000 (0:00:00.903) 0:02:10.315 ******
2026-01-03 00:49:54.066769 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:49:54.066774 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:49:54.066779 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:49:54.066784 | orchestrator |
2026-01-03 00:49:54.066789 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-01-03 00:49:54.066795 | orchestrator | Saturday 03 January 2026 00:47:44 +0000 (0:00:00.278) 0:02:10.593 ******
2026-01-03 00:49:54.066800 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:49:54.066806 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:49:54.066812 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:49:54.066816 | orchestrator |
2026-01-03 00:49:54.066821 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-01-03 00:49:54.066826 | orchestrator | Saturday 03 January 2026 00:47:45 +0000 (0:00:00.653) 0:02:11.247 ******
2026-01-03 00:49:54.066831 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:49:54.066836 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:49:54.066842 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:49:54.066848 | orchestrator |
2026-01-03 00:49:54.066853 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-01-03 00:49:54.066858 | orchestrator | Saturday 03 January 2026 00:47:46 +0000 (0:00:00.669) 0:02:11.916 ******
2026-01-03 00:49:54.066863 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:49:54.066868 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:49:54.066873 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:49:54.066878 | orchestrator |
2026-01-03 00:49:54.066883 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-01-03 00:49:54.066888 | orchestrator | Saturday 03 January 2026 00:47:47 +0000 (0:00:01.209) 0:02:13.126 ******
2026-01-03 00:49:54.066949 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:49:54.066957 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:49:54.066971 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:49:54.066976 | orchestrator |
2026-01-03 00:49:54.066982 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-01-03 00:49:54.066987 | orchestrator | Saturday 03 January 2026 00:47:48 +0000 (0:00:00.868) 0:02:13.994 ******
2026-01-03 00:49:54.066993 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:49:54.066998 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:49:54.067004 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:49:54.067010 | orchestrator |
2026-01-03 00:49:54.067019 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-01-03 00:49:54.067024 | orchestrator | Saturday 03 January 2026 00:47:48 +0000 (0:00:00.283) 0:02:14.278 ******
2026-01-03 00:49:54.067029 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:49:54.067035 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:49:54.067040 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:49:54.067044 | orchestrator |
2026-01-03 00:49:54.067049 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-01-03 00:49:54.067055 | orchestrator | Saturday 03 January 2026 00:47:48 +0000 (0:00:00.262) 0:02:14.541 ******
2026-01-03 00:49:54.067060 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:49:54.067065 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:49:54.067069 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:49:54.067075 | orchestrator |
2026-01-03 00:49:54.067080 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-01-03 00:49:54.067086 | orchestrator | Saturday 03 January 2026 00:47:49 +0000 (0:00:00.838) 0:02:15.379 ******
2026-01-03 00:49:54.067091 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:49:54.067104 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:49:54.067109 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:49:54.067114 | orchestrator |
2026-01-03 00:49:54.067121 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-01-03 00:49:54.067127 | orchestrator | Saturday 03 January 2026 00:47:50 +0000 (0:00:00.671) 0:02:16.050 ******
2026-01-03 00:49:54.067132 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-01-03 00:49:54.067137 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-01-03 00:49:54.067147 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-01-03 00:49:54.067152 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-01-03 00:49:54.067158 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-01-03 00:49:54.067163 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-01-03 00:49:54.067168 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-01-03 00:49:54.067173 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-01-03 00:49:54.067178 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-01-03 00:49:54.067208 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-01-03 00:49:54.067214 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-01-03 00:49:54.067218 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-01-03 00:49:54.067223 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-01-03 00:49:54.067228 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-01-03 00:49:54.067233 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-01-03 00:49:54.067247 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-01-03 00:49:54.067253 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-01-03 00:49:54.067258 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-01-03 00:49:54.067263 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-01-03 00:49:54.067268 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-01-03 00:49:54.067274 | orchestrator |
2026-01-03 00:49:54.067279 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-01-03 00:49:54.067284 | orchestrator |
2026-01-03 00:49:54.067289 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-01-03 00:49:54.067294 | orchestrator | Saturday 03 January 2026 00:47:53 +0000 (0:00:03.058) 0:02:19.109 ******
2026-01-03 00:49:54.067299 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:49:54.067304 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:49:54.067309 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:49:54.067314 | orchestrator |
2026-01-03 00:49:54.067320 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-01-03 00:49:54.067325 | orchestrator | Saturday 03 January 2026 00:47:53 +0000 (0:00:00.505) 0:02:19.615 ******
2026-01-03 00:49:54.067331 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:49:54.067337 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:49:54.067434 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:49:54.067442 | orchestrator |
2026-01-03 00:49:54.067448 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-01-03 00:49:54.067453 | orchestrator | Saturday 03 January 2026 00:47:54 +0000 (0:00:00.633) 0:02:20.249 ******
2026-01-03 00:49:54.067458 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:49:54.067463 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:49:54.067467 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:49:54.067472 | orchestrator |
2026-01-03 00:49:54.067477 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-01-03 00:49:54.067482 | orchestrator | Saturday 03 January 2026 00:47:54 +0000 (0:00:00.676) 0:02:20.598 ******
2026-01-03 00:49:54.067487 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:49:54.067492 | orchestrator |
2026-01-03 00:49:54.067497 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-01-03 00:49:54.067503 | orchestrator | Saturday 03 January 2026 00:47:55 +0000 (0:00:00.676) 0:02:21.274 ******
2026-01-03 00:49:54.067508 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:49:54.067513 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:49:54.067518 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:49:54.067523 | orchestrator |
2026-01-03 00:49:54.067528 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-01-03 00:49:54.067533 | orchestrator | Saturday 03 January 2026 00:47:55 +0000 (0:00:00.319) 0:02:21.594 ******
2026-01-03 00:49:54.067538 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:49:54.067543 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:49:54.067548 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:49:54.067553 | orchestrator |
2026-01-03 00:49:54.067558 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-01-03 00:49:54.067570 | orchestrator | Saturday 03 January 2026 00:47:56 +0000 (0:00:00.302) 0:02:21.896 ******
2026-01-03 00:49:54.067575 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:49:54.067580 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:49:54.067585 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:49:54.067590 | orchestrator |
2026-01-03 00:49:54.067595 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-01-03 00:49:54.067600 | orchestrator | Saturday 03 January 2026 00:47:56 +0000 (0:00:00.278) 0:02:22.174 ******
2026-01-03 00:49:54.067611 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:49:54.067616 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:49:54.067621 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:49:54.067626 | orchestrator |
2026-01-03 00:49:54.067630 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-01-03 00:49:54.067640 | orchestrator | Saturday 03 January 2026 00:47:57 +0000 (0:00:00.801) 0:02:22.976 ******
2026-01-03 00:49:54.067646 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:49:54.067651 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:49:54.067656 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:49:54.067660 | orchestrator |
2026-01-03 00:49:54.067665 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-01-03 00:49:54.067670 | orchestrator | Saturday 03 January 2026 00:47:58 +0000 (0:00:01.166) 0:02:24.143 ******
2026-01-03 00:49:54.067675 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:49:54.067680 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:49:54.067685 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:49:54.067690 | orchestrator |
2026-01-03 00:49:54.067695 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-01-03 00:49:54.067700 | orchestrator | Saturday 03 January 2026 00:47:59 +0000 (0:00:01.242) 0:02:25.386 ******
2026-01-03 00:49:54.067705 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:49:54.067709 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:49:54.067714 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:49:54.067719 | orchestrator |
2026-01-03 00:49:54.067724 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-01-03 00:49:54.067729 | orchestrator |
2026-01-03 00:49:54.067734 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-01-03 00:49:54.067739 | orchestrator | Saturday 03 January 2026 00:48:10 +0000 (0:00:11.132) 0:02:36.518 ******
2026-01-03 00:49:54.067744 | orchestrator | ok: [testbed-manager]
2026-01-03 00:49:54.067749 | orchestrator |
2026-01-03 00:49:54.067754 | orchestrator | TASK [Create .kube directory] **************************************************
2026-01-03 00:49:54.067759 | orchestrator | Saturday 03 January 2026 00:48:11 +0000 (0:00:00.789) 0:02:37.307 ******
2026-01-03 00:49:54.067763 | orchestrator | changed: [testbed-manager]
2026-01-03 00:49:54.067768 | orchestrator |
2026-01-03 00:49:54.067773 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-01-03 00:49:54.067778 | orchestrator | Saturday 03 January 2026 00:48:12 +0000 (0:00:00.454) 0:02:37.762 ******
2026-01-03 00:49:54.067783 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-01-03 00:49:54.067789 | orchestrator |
2026-01-03 00:49:54.067794 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-01-03 00:49:54.067799 | orchestrator | Saturday 03 January 2026 00:48:12 +0000 (0:00:00.491) 0:02:38.253 ******
2026-01-03 00:49:54.067804 | orchestrator | changed: [testbed-manager]
2026-01-03 00:49:54.067809 | orchestrator |
2026-01-03 00:49:54.067814 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-01-03 00:49:54.067820 | orchestrator | Saturday 03 January 2026 00:48:13 +0000 (0:00:00.893) 0:02:39.147 ******
2026-01-03 00:49:54.067825 | orchestrator | changed: [testbed-manager]
2026-01-03 00:49:54.067831 | orchestrator |
2026-01-03 00:49:54.067836 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-01-03 00:49:54.067841 | orchestrator | Saturday 03 January 2026 00:48:14 +0000 (0:00:00.664) 0:02:39.812 ******
2026-01-03 00:49:54.067846 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-03 00:49:54.067851 | orchestrator |
2026-01-03 00:49:54.067856 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-01-03 00:49:54.067861 | orchestrator | Saturday 03 January 2026 00:48:15 +0000 (0:00:01.379) 0:02:41.191 ******
2026-01-03 00:49:54.067866 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-03 00:49:54.067871 | orchestrator |
2026-01-03 00:49:54.067876 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-01-03 00:49:54.067885 | orchestrator | Saturday 03 January 2026 00:48:16 +0000 (0:00:00.693) 0:02:41.884 ******
2026-01-03 00:49:54.067890 | orchestrator | changed: [testbed-manager]
2026-01-03 00:49:54.067897 | orchestrator |
2026-01-03 00:49:54.067900 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-01-03 00:49:54.067904 | orchestrator | Saturday 03 January 2026 00:48:16 +0000 (0:00:00.414) 0:02:42.298 ******
2026-01-03 00:49:54.067907 | orchestrator | changed: [testbed-manager]
2026-01-03 00:49:54.067910 | orchestrator |
2026-01-03 00:49:54.067913 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-01-03 00:49:54.067916 | orchestrator |
2026-01-03 00:49:54.067919 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-01-03 00:49:54.067923 | orchestrator | Saturday 03 January 2026 00:48:16 +0000 (0:00:00.330) 0:02:42.629 ******
2026-01-03 00:49:54.067926 | orchestrator | ok: [testbed-manager]
2026-01-03 00:49:54.067929 | orchestrator |
2026-01-03 00:49:54.067932 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-01-03 00:49:54.067935 | orchestrator | Saturday 03 January 2026 00:48:16 +0000 (0:00:00.105) 0:02:42.735 ******
2026-01-03 00:49:54.067939 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-01-03 00:49:54.067942 | orchestrator |
2026-01-03 00:49:54.067945 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-01-03 00:49:54.067951 | orchestrator | Saturday 03 January 2026 00:48:17 +0000 (0:00:00.162) 0:02:42.898 ******
2026-01-03 00:49:54.067956 | orchestrator | ok: [testbed-manager]
2026-01-03 00:49:54.067964 | orchestrator |
2026-01-03 00:49:54.067972 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-01-03 00:49:54.067977 | orchestrator | Saturday 03 January 2026 00:48:17 +0000 (0:00:00.596) 0:02:43.494 ******
2026-01-03 00:49:54.067987 | orchestrator | ok: [testbed-manager]
2026-01-03 00:49:54.067992 | orchestrator |
2026-01-03 00:49:54.067997 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-01-03 00:49:54.068002 | orchestrator | Saturday 03 January 2026 00:48:18 +0000 (0:00:01.230) 0:02:44.725 ******
2026-01-03 00:49:54.068007 | orchestrator | changed: [testbed-manager]
2026-01-03 00:49:54.068011 | orchestrator |
2026-01-03 00:49:54.068016 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-01-03 00:49:54.068021 | orchestrator | Saturday 03 January 2026 00:48:19 +0000 (0:00:00.835) 0:02:45.561 ******
2026-01-03 00:49:54.068025 | orchestrator | ok: [testbed-manager]
2026-01-03 00:49:54.068030 | orchestrator |
2026-01-03 00:49:54.068035 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-01-03 00:49:54.068041 | orchestrator | Saturday 03 January 2026 00:48:20 +0000 (0:00:00.414) 0:02:45.975 ******
2026-01-03 00:49:54.068046 | orchestrator | changed: [testbed-manager]
2026-01-03 00:49:54.068051 | orchestrator |
2026-01-03 00:49:54.068055 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-01-03 00:49:54.068060 | orchestrator | Saturday 03 January 2026 00:48:26 +0000 (0:00:06.724) 0:02:52.700 ******
2026-01-03 00:49:54.068065 | orchestrator | changed: [testbed-manager]
2026-01-03 00:49:54.068070 | orchestrator |
2026-01-03 00:49:54.068075 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-01-03 00:49:54.068080 | orchestrator | Saturday 03 January 2026 00:48:37 +0000 (0:00:11.019) 0:03:03.720 ******
2026-01-03 00:49:54.068085 | orchestrator | ok: [testbed-manager]
2026-01-03 00:49:54.068091 | orchestrator |
2026-01-03 00:49:54.068096 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-01-03 00:49:54.068101 | orchestrator |
2026-01-03 00:49:54.068106 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-01-03 00:49:54.068112 | orchestrator | Saturday 03 January 2026 00:48:38 +0000 (0:00:00.488) 0:03:04.209 ******
2026-01-03 00:49:54.068118 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:49:54.068122 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:49:54.068125 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:49:54.068133 | orchestrator |
2026-01-03 00:49:54.068137 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-01-03 00:49:54.068140 | orchestrator | Saturday 03 January 2026 00:48:38 +0000 (0:00:00.249) 0:03:04.458 ******
2026-01-03 00:49:54.068143 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:49:54.068147 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:49:54.068150 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:49:54.068153 | orchestrator |
2026-01-03 00:49:54.068156 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-01-03 00:49:54.068159 | orchestrator | Saturday 03 January 2026 00:48:38 +0000 (0:00:00.243) 0:03:04.702 ******
2026-01-03 00:49:54.068162 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:49:54.068166 | orchestrator |
2026-01-03 00:49:54.068169 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-01-03 00:49:54.068172 | orchestrator | Saturday 03 January 2026 00:48:39 +0000 (0:00:00.563) 0:03:05.265 ******
2026-01-03 00:49:54.068175 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-03 00:49:54.068179 | orchestrator |
2026-01-03 00:49:54.068182 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-01-03 00:49:54.068185 | orchestrator | Saturday 03 January 2026 00:48:40 +0000 (0:00:00.878) 0:03:06.144 ******
2026-01-03 00:49:54.068189 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-03 00:49:54.068192 | orchestrator |
2026-01-03 00:49:54.068195 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-01-03 00:49:54.068198 | orchestrator | Saturday 03 January 2026 00:48:41 +0000 (0:00:00.752) 0:03:06.897 ******
2026-01-03 00:49:54.068201 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:49:54.068205 | orchestrator |
2026-01-03 00:49:54.068208 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-01-03 00:49:54.068211 | orchestrator | Saturday 03 January 2026 00:48:41 +0000 (0:00:00.117) 0:03:07.014 ******
2026-01-03 00:49:54.068214 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-03 00:49:54.068217 | orchestrator |
2026-01-03 00:49:54.068220 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-01-03 00:49:54.068224 | orchestrator | Saturday 03 January 2026 00:48:42 +0000 (0:00:00.841) 0:03:07.856 ******
2026-01-03 00:49:54.068227 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:49:54.068230 | orchestrator |
2026-01-03 00:49:54.068233 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-01-03 00:49:54.068236 | orchestrator | Saturday 03 January 2026 00:48:42 +0000 (0:00:00.102) 0:03:07.958 ******
2026-01-03 00:49:54.068239 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:49:54.068243 | orchestrator |
2026-01-03 00:49:54.068246 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-01-03 00:49:54.068249 | orchestrator | Saturday 03 January 2026 00:48:42 +0000 (0:00:00.145) 0:03:08.104 ******
2026-01-03 00:49:54.068252 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:49:54.068255 | orchestrator |
2026-01-03 00:49:54.068259 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-01-03 00:49:54.068262 | orchestrator | Saturday 03 January 2026 00:48:42 +0000 (0:00:00.113) 0:03:08.217 ******
2026-01-03 00:49:54.068265 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:49:54.068268 | orchestrator |
2026-01-03 00:49:54.068271 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-01-03 00:49:54.068274 | orchestrator | Saturday 03 January 2026 00:48:42 +0000 (0:00:00.108) 0:03:08.326 ******
2026-01-03 00:49:54.068319 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-03 00:49:54.068333 | orchestrator |
2026-01-03 00:49:54.068337 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-01-03 00:49:54.068340 | orchestrator | Saturday 03 January 2026 00:48:47 +0000 (0:00:05.386) 0:03:13.713 ******
2026-01-03 00:49:54.068357 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-01-03 00:49:54.068372 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-01-03 00:49:54.068376 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-01-03 00:49:54.068379 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-01-03 00:49:54.068383 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-01-03 00:49:54.068386 | orchestrator |
2026-01-03 00:49:54.068389 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-01-03 00:49:54.068392 | orchestrator | Saturday 03 January 2026 00:49:29 +0000 (0:00:41.671) 0:03:55.384 ******
2026-01-03 00:49:54.068395 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-03 00:49:54.068399 | orchestrator |
2026-01-03 00:49:54.068404 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-01-03 00:49:54.068407 | orchestrator | Saturday 03 January 2026 00:49:30 +0000 (0:00:00.916) 0:03:56.301 ******
2026-01-03 00:49:54.068410 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-03 00:49:54.068414 | orchestrator |
2026-01-03 00:49:54.068417 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-01-03 00:49:54.068420 | orchestrator | Saturday 03 January 2026 00:49:31 +0000 (0:00:01.378) 0:03:57.680 ******
2026-01-03 00:49:54.068423 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-03 00:49:54.068426 | orchestrator |
2026-01-03 00:49:54.068430 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-01-03 00:49:54.068433 | orchestrator | Saturday 03 January 2026 00:49:32 +0000 (0:00:00.936) 0:03:58.617 ******
2026-01-03 00:49:54.068436 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:49:54.068439 |
orchestrator | 2026-01-03 00:49:54.068443 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-01-03 00:49:54.068446 | orchestrator | Saturday 03 January 2026 00:49:32 +0000 (0:00:00.109) 0:03:58.726 ****** 2026-01-03 00:49:54.068449 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-01-03 00:49:54.068452 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-01-03 00:49:54.068456 | orchestrator | 2026-01-03 00:49:54.068459 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-01-03 00:49:54.068462 | orchestrator | Saturday 03 January 2026 00:49:34 +0000 (0:00:01.579) 0:04:00.305 ****** 2026-01-03 00:49:54.068465 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:49:54.068468 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:49:54.068472 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:49:54.068475 | orchestrator | 2026-01-03 00:49:54.068478 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-01-03 00:49:54.068481 | orchestrator | Saturday 03 January 2026 00:49:34 +0000 (0:00:00.366) 0:04:00.672 ****** 2026-01-03 00:49:54.068484 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:49:54.068488 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:49:54.068491 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:49:54.068494 | orchestrator | 2026-01-03 00:49:54.068497 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-01-03 00:49:54.068500 | orchestrator | 2026-01-03 00:49:54.068504 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-01-03 00:49:54.068507 | orchestrator | Saturday 03 January 2026 00:49:36 +0000 (0:00:01.152) 0:04:01.825 ****** 2026-01-03 00:49:54.068510 | 
orchestrator | ok: [testbed-manager] 2026-01-03 00:49:54.068515 | orchestrator | 2026-01-03 00:49:54.068522 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-01-03 00:49:54.068529 | orchestrator | Saturday 03 January 2026 00:49:36 +0000 (0:00:00.149) 0:04:01.975 ****** 2026-01-03 00:49:54.068534 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-01-03 00:49:54.068540 | orchestrator | 2026-01-03 00:49:54.068545 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-01-03 00:49:54.068555 | orchestrator | Saturday 03 January 2026 00:49:36 +0000 (0:00:00.221) 0:04:02.196 ****** 2026-01-03 00:49:54.068561 | orchestrator | changed: [testbed-manager] 2026-01-03 00:49:54.068566 | orchestrator | 2026-01-03 00:49:54.068571 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-01-03 00:49:54.068574 | orchestrator | 2026-01-03 00:49:54.068577 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-01-03 00:49:54.068581 | orchestrator | Saturday 03 January 2026 00:49:41 +0000 (0:00:04.661) 0:04:06.858 ****** 2026-01-03 00:49:54.068584 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:49:54.068587 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:49:54.068590 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:49:54.068593 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:49:54.068596 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:49:54.068601 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:49:54.068607 | orchestrator | 2026-01-03 00:49:54.068611 | orchestrator | TASK [Manage labels] *********************************************************** 2026-01-03 00:49:54.068614 | orchestrator | Saturday 03 January 2026 00:49:42 +0000 (0:00:00.908) 0:04:07.766 ****** 2026-01-03 00:49:54.068617 | orchestrator | ok: [testbed-node-0 
-> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-03 00:49:54.068621 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-01-03 00:49:54.068624 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-01-03 00:49:54.068627 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-01-03 00:49:54.068630 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-03 00:49:54.068634 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-03 00:49:54.068637 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-03 00:49:54.068640 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-03 00:49:54.068646 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-03 00:49:54.068650 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-03 00:49:54.068653 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-03 00:49:54.068656 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-03 00:49:54.068659 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-03 00:49:54.068665 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-03 00:49:54.068668 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-03 00:49:54.068672 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-03 00:49:54.068675 | orchestrator | ok: 
[testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-03 00:49:54.068678 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-03 00:49:54.068681 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-03 00:49:54.068684 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-03 00:49:54.068687 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-03 00:49:54.068691 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-03 00:49:54.068694 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-03 00:49:54.068697 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-03 00:49:54.068703 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-03 00:49:54.068707 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-03 00:49:54.068710 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-03 00:49:54.068713 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-03 00:49:54.068716 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-03 00:49:54.068719 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-03 00:49:54.068723 | orchestrator | 2026-01-03 00:49:54.068726 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-01-03 00:49:54.068729 | orchestrator | Saturday 03 January 2026 00:49:52 +0000 (0:00:10.523) 0:04:18.290 ****** 2026-01-03 00:49:54.068732 | 
orchestrator | skipping: [testbed-node-3] 2026-01-03 00:49:54.068735 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:49:54.068739 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:49:54.068742 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:49:54.068745 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:49:54.068748 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:49:54.068751 | orchestrator | 2026-01-03 00:49:54.068755 | orchestrator | TASK [Manage taints] *********************************************************** 2026-01-03 00:49:54.068758 | orchestrator | Saturday 03 January 2026 00:49:53 +0000 (0:00:00.647) 0:04:18.937 ****** 2026-01-03 00:49:54.068761 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:49:54.068764 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:49:54.068767 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:49:54.068771 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:49:54.068774 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:49:54.068777 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:49:54.068780 | orchestrator | 2026-01-03 00:49:54.068783 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:49:54.068787 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:49:54.068791 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-01-03 00:49:54.068794 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-03 00:49:54.068798 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-03 00:49:54.068801 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-03 00:49:54.068804 | orchestrator | 
testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-03 00:49:54.068807 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-03 00:49:54.068810 | orchestrator | 2026-01-03 00:49:54.068814 | orchestrator | 2026-01-03 00:49:54.068817 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:49:54.068823 | orchestrator | Saturday 03 January 2026 00:49:53 +0000 (0:00:00.397) 0:04:19.334 ****** 2026-01-03 00:49:54.068826 | orchestrator | =============================================================================== 2026-01-03 00:49:54.068842 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 53.95s 2026-01-03 00:49:54.068846 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 41.67s 2026-01-03 00:49:54.068851 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.98s 2026-01-03 00:49:54.068855 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 11.13s 2026-01-03 00:49:54.068860 | orchestrator | kubectl : Install required packages ------------------------------------ 11.02s 2026-01-03 00:49:54.068863 | orchestrator | Manage labels ---------------------------------------------------------- 10.52s 2026-01-03 00:49:54.068867 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.72s 2026-01-03 00:49:54.068870 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.39s 2026-01-03 00:49:54.068873 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.22s 2026-01-03 00:49:54.068876 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 4.66s 2026-01-03 00:49:54.068879 | orchestrator | k3s_server : 
Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.06s 2026-01-03 00:49:54.068883 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.77s 2026-01-03 00:49:54.068886 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.71s 2026-01-03 00:49:54.068889 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.19s 2026-01-03 00:49:54.068892 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 1.77s 2026-01-03 00:49:54.068895 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 1.71s 2026-01-03 00:49:54.068899 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.71s 2026-01-03 00:49:54.068902 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 1.66s 2026-01-03 00:49:54.068905 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 1.59s 2026-01-03 00:49:54.068908 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.58s 2026-01-03 00:49:54.068911 | orchestrator | 2026-01-03 00:49:54 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:49:54.068915 | orchestrator | 2026-01-03 00:49:54 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:49:54.068918 | orchestrator | 2026-01-03 00:49:54 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:49:54.068921 | orchestrator | 2026-01-03 00:49:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:49:57.138899 | orchestrator | 2026-01-03 00:49:57 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:49:57.139409 | orchestrator | 2026-01-03 00:49:57 | INFO  | Task 
a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:49:57.139963 | orchestrator | 2026-01-03 00:49:57 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:49:57.140589 | orchestrator | 2026-01-03 00:49:57 | INFO  | Task 4f80a29b-aa80-4b70-a7a6-cffaab2ed37a is in state STARTED 2026-01-03 00:49:57.141560 | orchestrator | 2026-01-03 00:49:57 | INFO  | Task 25c941d0-62d6-44c2-aa1c-abd50b2fce12 is in state STARTED 2026-01-03 00:49:57.141833 | orchestrator | 2026-01-03 00:49:57 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:49:57.142772 | orchestrator | 2026-01-03 00:49:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:50:00.217279 | orchestrator | 2026-01-03 00:50:00 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:50:00.217825 | orchestrator | 2026-01-03 00:50:00 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:50:00.218829 | orchestrator | 2026-01-03 00:50:00 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:50:00.220934 | orchestrator | 2026-01-03 00:50:00 | INFO  | Task 4f80a29b-aa80-4b70-a7a6-cffaab2ed37a is in state STARTED 2026-01-03 00:50:00.223179 | orchestrator | 2026-01-03 00:50:00 | INFO  | Task 25c941d0-62d6-44c2-aa1c-abd50b2fce12 is in state STARTED 2026-01-03 00:50:00.223666 | orchestrator | 2026-01-03 00:50:00 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:50:00.223689 | orchestrator | 2026-01-03 00:50:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:50:03.254180 | orchestrator | 2026-01-03 00:50:03 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:50:03.254642 | orchestrator | 2026-01-03 00:50:03 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:50:03.255718 | orchestrator | 2026-01-03 00:50:03 | INFO  | Task 
72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:50:03.256455 | orchestrator | 2026-01-03 00:50:03 | INFO  | Task 4f80a29b-aa80-4b70-a7a6-cffaab2ed37a is in state STARTED 2026-01-03 00:50:03.257867 | orchestrator | 2026-01-03 00:50:03 | INFO  | Task 25c941d0-62d6-44c2-aa1c-abd50b2fce12 is in state SUCCESS 2026-01-03 00:50:03.259074 | orchestrator | 2026-01-03 00:50:03 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:50:03.259365 | orchestrator | 2026-01-03 00:50:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:50:06.295675 | orchestrator | 2026-01-03 00:50:06 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:50:06.297445 | orchestrator | 2026-01-03 00:50:06 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:50:06.299143 | orchestrator | 2026-01-03 00:50:06 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:50:06.300373 | orchestrator | 2026-01-03 00:50:06 | INFO  | Task 4f80a29b-aa80-4b70-a7a6-cffaab2ed37a is in state SUCCESS 2026-01-03 00:50:06.302210 | orchestrator | 2026-01-03 00:50:06 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:50:06.302240 | orchestrator | 2026-01-03 00:50:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:50:09.358431 | orchestrator | 2026-01-03 00:50:09 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:50:09.360644 | orchestrator | 2026-01-03 00:50:09 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:50:09.362936 | orchestrator | 2026-01-03 00:50:09 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:50:09.365284 | orchestrator | 2026-01-03 00:50:09 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:50:09.365437 | orchestrator | 2026-01-03 00:50:09 | INFO  | Wait 1 
second(s) until the next check 2026-01-03 00:50:12.400312 | orchestrator | 2026-01-03 00:50:12 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:50:12.400616 | orchestrator | 2026-01-03 00:50:12 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:50:12.401777 | orchestrator | 2026-01-03 00:50:12 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:50:12.403024 | orchestrator | 2026-01-03 00:50:12 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:50:12.403064 | orchestrator | 2026-01-03 00:50:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:50:15.430355 | orchestrator | 2026-01-03 00:50:15 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:50:15.430480 | orchestrator | 2026-01-03 00:50:15 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:50:15.431611 | orchestrator | 2026-01-03 00:50:15 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:50:15.432684 | orchestrator | 2026-01-03 00:50:15 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:50:15.432715 | orchestrator | 2026-01-03 00:50:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:50:18.453715 | orchestrator | 2026-01-03 00:50:18 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:50:18.454097 | orchestrator | 2026-01-03 00:50:18 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:50:18.456369 | orchestrator | 2026-01-03 00:50:18 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:50:18.458408 | orchestrator | 2026-01-03 00:50:18 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:50:18.458501 | orchestrator | 2026-01-03 00:50:18 | INFO  | Wait 1 second(s) until the next check 
2026-01-03 00:50:21.485555 | orchestrator | 2026-01-03 00:50:21 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:50:21.486764 | orchestrator | 2026-01-03 00:50:21 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:50:21.487663 | orchestrator | 2026-01-03 00:50:21 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:50:21.488628 | orchestrator | 2026-01-03 00:50:21 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:50:21.488668 | orchestrator | 2026-01-03 00:50:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:50:24.512452 | orchestrator | 2026-01-03 00:50:24 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:50:24.514324 | orchestrator | 2026-01-03 00:50:24 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:50:24.515947 | orchestrator | 2026-01-03 00:50:24 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:50:24.517371 | orchestrator | 2026-01-03 00:50:24 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:50:24.517434 | orchestrator | 2026-01-03 00:50:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:50:27.562223 | orchestrator | 2026-01-03 00:50:27 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:50:27.564251 | orchestrator | 2026-01-03 00:50:27 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:50:27.565243 | orchestrator | 2026-01-03 00:50:27 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:50:27.566941 | orchestrator | 2026-01-03 00:50:27 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:50:27.567087 | orchestrator | 2026-01-03 00:50:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:50:30.604899 | 
orchestrator | 2026-01-03 00:50:30 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:50:30.604993 | orchestrator | 2026-01-03 00:50:30 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:50:30.608481 | orchestrator | 2026-01-03 00:50:30 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:50:30.609027 | orchestrator | 2026-01-03 00:50:30 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:50:30.609078 | orchestrator | 2026-01-03 00:50:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:50:33.647779 | orchestrator | 2026-01-03 00:50:33 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:50:33.648422 | orchestrator | 2026-01-03 00:50:33 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:50:33.649227 | orchestrator | 2026-01-03 00:50:33 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:50:33.650082 | orchestrator | 2026-01-03 00:50:33 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:50:33.650194 | orchestrator | 2026-01-03 00:50:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:50:36.685447 | orchestrator | 2026-01-03 00:50:36 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:50:36.686543 | orchestrator | 2026-01-03 00:50:36 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:50:36.687964 | orchestrator | 2026-01-03 00:50:36 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:50:36.688985 | orchestrator | 2026-01-03 00:50:36 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:50:36.689025 | orchestrator | 2026-01-03 00:50:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:50:39.724123 | orchestrator | 2026-01-03 
00:50:39 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:50:39.724438 | orchestrator | 2026-01-03 00:50:39 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:50:39.725419 | orchestrator | 2026-01-03 00:50:39 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:50:39.727385 | orchestrator | 2026-01-03 00:50:39 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:50:39.727435 | orchestrator | 2026-01-03 00:50:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:50:42.807366 | orchestrator | 2026-01-03 00:50:42 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:50:42.808413 | orchestrator | 2026-01-03 00:50:42 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:50:42.810377 | orchestrator | 2026-01-03 00:50:42 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:50:42.812579 | orchestrator | 2026-01-03 00:50:42 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:50:42.812634 | orchestrator | 2026-01-03 00:50:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:50:45.841681 | orchestrator | 2026-01-03 00:50:45 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:50:45.844431 | orchestrator | 2026-01-03 00:50:45 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:50:45.847300 | orchestrator | 2026-01-03 00:50:45 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:50:45.848977 | orchestrator | 2026-01-03 00:50:45 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:50:45.849310 | orchestrator | 2026-01-03 00:50:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:50:48.884236 | orchestrator | 2026-01-03 00:50:48 | INFO  | Task 
db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:50:48.884921 | orchestrator | 2026-01-03 00:50:48 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state STARTED 2026-01-03 00:50:48.885895 | orchestrator | 2026-01-03 00:50:48 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:50:48.886704 | orchestrator | 2026-01-03 00:50:48 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:50:48.886734 | orchestrator | 2026-01-03 00:50:48 | INFO  | Wait 1 second(s) until the next check
[identical status polling repeated every ~3 seconds from 00:50:51 through 00:51:40; all four tasks remained in state STARTED]
2026-01-03 00:51:43.541199 | orchestrator | 2026-01-03 00:51:43 | INFO  | Task
db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:51:43.541535 | orchestrator | 2026-01-03 00:51:43 | INFO  | Task a020e1ac-95d2-4721-b954-2967995d32d1 is in state SUCCESS 2026-01-03 00:51:43.541904 | orchestrator | 2026-01-03 00:51:43.541979 | orchestrator | 2026-01-03 00:51:43.541985 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-01-03 00:51:43.541990 | orchestrator | 2026-01-03 00:51:43.541994 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-03 00:51:43.541999 | orchestrator | Saturday 03 January 2026 00:49:58 +0000 (0:00:00.173) 0:00:00.173 ****** 2026-01-03 00:51:43.542006 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-03 00:51:43.542046 | orchestrator | 2026-01-03 00:51:43.542054 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-03 00:51:43.542060 | orchestrator | Saturday 03 January 2026 00:49:59 +0000 (0:00:00.668) 0:00:00.842 ****** 2026-01-03 00:51:43.542067 | orchestrator | changed: [testbed-manager] 2026-01-03 00:51:43.542074 | orchestrator | 2026-01-03 00:51:43.542081 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-01-03 00:51:43.542087 | orchestrator | Saturday 03 January 2026 00:50:00 +0000 (0:00:01.047) 0:00:01.889 ****** 2026-01-03 00:51:43.542094 | orchestrator | changed: [testbed-manager] 2026-01-03 00:51:43.542100 | orchestrator | 2026-01-03 00:51:43.542107 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:51:43.542115 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:51:43.542124 | orchestrator | 2026-01-03 00:51:43.542130 | orchestrator | 2026-01-03 00:51:43.542154 | orchestrator | TASKS RECAP 
******************************************************************** 2026-01-03 00:51:43.542161 | orchestrator | Saturday 03 January 2026 00:50:00 +0000 (0:00:00.403) 0:00:02.293 ****** 2026-01-03 00:51:43.542168 | orchestrator | =============================================================================== 2026-01-03 00:51:43.542175 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.05s 2026-01-03 00:51:43.542182 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.67s 2026-01-03 00:51:43.542188 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.40s 2026-01-03 00:51:43.542195 | orchestrator | 2026-01-03 00:51:43.542201 | orchestrator | 2026-01-03 00:51:43.542207 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-01-03 00:51:43.542213 | orchestrator | 2026-01-03 00:51:43.542219 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-01-03 00:51:43.542241 | orchestrator | Saturday 03 January 2026 00:49:58 +0000 (0:00:00.180) 0:00:00.180 ****** 2026-01-03 00:51:43.542249 | orchestrator | ok: [testbed-manager] 2026-01-03 00:51:43.542256 | orchestrator | 2026-01-03 00:51:43.542262 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-01-03 00:51:43.542269 | orchestrator | Saturday 03 January 2026 00:49:59 +0000 (0:00:00.495) 0:00:00.676 ****** 2026-01-03 00:51:43.542275 | orchestrator | ok: [testbed-manager] 2026-01-03 00:51:43.542281 | orchestrator | 2026-01-03 00:51:43.542288 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-03 00:51:43.542294 | orchestrator | Saturday 03 January 2026 00:49:59 +0000 (0:00:00.489) 0:00:01.166 ****** 2026-01-03 00:51:43.542300 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-03 
00:51:43.542306 | orchestrator | 2026-01-03 00:51:43.542312 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-03 00:51:43.542318 | orchestrator | Saturday 03 January 2026 00:50:00 +0000 (0:00:00.663) 0:00:01.830 ****** 2026-01-03 00:51:43.542344 | orchestrator | changed: [testbed-manager] 2026-01-03 00:51:43.542351 | orchestrator | 2026-01-03 00:51:43.542357 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-01-03 00:51:43.542364 | orchestrator | Saturday 03 January 2026 00:50:02 +0000 (0:00:01.800) 0:00:03.630 ****** 2026-01-03 00:51:43.542370 | orchestrator | changed: [testbed-manager] 2026-01-03 00:51:43.542451 | orchestrator | 2026-01-03 00:51:43.542459 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-01-03 00:51:43.542466 | orchestrator | Saturday 03 January 2026 00:50:02 +0000 (0:00:00.474) 0:00:04.105 ****** 2026-01-03 00:51:43.542473 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-03 00:51:43.542479 | orchestrator | 2026-01-03 00:51:43.542485 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-01-03 00:51:43.542492 | orchestrator | Saturday 03 January 2026 00:50:03 +0000 (0:00:01.398) 0:00:05.504 ****** 2026-01-03 00:51:43.542498 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-03 00:51:43.542505 | orchestrator | 2026-01-03 00:51:43.542511 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-01-03 00:51:43.542517 | orchestrator | Saturday 03 January 2026 00:50:04 +0000 (0:00:00.664) 0:00:06.168 ****** 2026-01-03 00:51:43.542523 | orchestrator | ok: [testbed-manager] 2026-01-03 00:51:43.542530 | orchestrator | 2026-01-03 00:51:43.542536 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-01-03 00:51:43.542543 | 
orchestrator | Saturday 03 January 2026 00:50:05 +0000 (0:00:00.380) 0:00:06.548 ****** 2026-01-03 00:51:43.542550 | orchestrator | ok: [testbed-manager] 2026-01-03 00:51:43.542556 | orchestrator | 2026-01-03 00:51:43.542562 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:51:43.542569 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:51:43.542575 | orchestrator | 2026-01-03 00:51:43.542582 | orchestrator | 2026-01-03 00:51:43.542588 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:51:43.542594 | orchestrator | Saturday 03 January 2026 00:50:05 +0000 (0:00:00.297) 0:00:06.846 ****** 2026-01-03 00:51:43.542601 | orchestrator | =============================================================================== 2026-01-03 00:51:43.542607 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.80s 2026-01-03 00:51:43.542613 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.40s 2026-01-03 00:51:43.542620 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.66s 2026-01-03 00:51:43.542638 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.66s 2026-01-03 00:51:43.542643 | orchestrator | Get home directory of operator user ------------------------------------- 0.50s 2026-01-03 00:51:43.542648 | orchestrator | Create .kube directory -------------------------------------------------- 0.49s 2026-01-03 00:51:43.542652 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.47s 2026-01-03 00:51:43.542656 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.38s 2026-01-03 00:51:43.542661 | orchestrator | Enable kubectl command line completion 
---------------------------------- 0.30s 2026-01-03 00:51:43.542665 | orchestrator | 2026-01-03 00:51:43.543715 | orchestrator | 2026-01-03 00:51:43.543739 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-01-03 00:51:43.543744 | orchestrator | 2026-01-03 00:51:43.543748 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-01-03 00:51:43.543752 | orchestrator | Saturday 03 January 2026 00:48:39 +0000 (0:00:00.181) 0:00:00.181 ****** 2026-01-03 00:51:43.543756 | orchestrator | ok: [localhost] => { 2026-01-03 00:51:43.543762 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-01-03 00:51:43.543766 | orchestrator | } 2026-01-03 00:51:43.543781 | orchestrator | 2026-01-03 00:51:43.543785 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-01-03 00:51:43.543798 | orchestrator | Saturday 03 January 2026 00:48:39 +0000 (0:00:00.056) 0:00:00.238 ****** 2026-01-03 00:51:43.543803 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-01-03 00:51:43.543809 | orchestrator | ...ignoring 2026-01-03 00:51:43.543812 | orchestrator | 2026-01-03 00:51:43.543817 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-01-03 00:51:43.543820 | orchestrator | Saturday 03 January 2026 00:48:42 +0000 (0:00:03.071) 0:00:03.309 ****** 2026-01-03 00:51:43.543824 | orchestrator | skipping: [localhost] 2026-01-03 00:51:43.543828 | orchestrator | 2026-01-03 00:51:43.543832 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-01-03 00:51:43.543836 | orchestrator | Saturday 03 January 2026 00:48:42 +0000 (0:00:00.040) 0:00:03.350 ****** 2026-01-03 00:51:43.543843 | orchestrator | ok: [localhost] 2026-01-03 00:51:43.543847 | orchestrator | 2026-01-03 00:51:43.543851 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-03 00:51:43.543855 | orchestrator | 2026-01-03 00:51:43.543858 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-03 00:51:43.543862 | orchestrator | Saturday 03 January 2026 00:48:42 +0000 (0:00:00.144) 0:00:03.494 ****** 2026-01-03 00:51:43.543866 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:51:43.543870 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:51:43.543873 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:51:43.543877 | orchestrator | 2026-01-03 00:51:43.543881 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-03 00:51:43.543885 | orchestrator | Saturday 03 January 2026 00:48:43 +0000 (0:00:00.291) 0:00:03.785 ****** 2026-01-03 00:51:43.543888 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-01-03 00:51:43.543893 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2026-01-03 00:51:43.543896 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-01-03 00:51:43.543900 | orchestrator | 2026-01-03 00:51:43.543904 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-01-03 00:51:43.543907 | orchestrator | 2026-01-03 00:51:43.543911 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-03 00:51:43.543915 | orchestrator | Saturday 03 January 2026 00:48:44 +0000 (0:00:00.882) 0:00:04.668 ****** 2026-01-03 00:51:43.543919 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:51:43.543923 | orchestrator | 2026-01-03 00:51:43.543927 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-03 00:51:43.543931 | orchestrator | Saturday 03 January 2026 00:48:44 +0000 (0:00:00.693) 0:00:05.362 ****** 2026-01-03 00:51:43.543934 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:51:43.543938 | orchestrator | 2026-01-03 00:51:43.543950 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-01-03 00:51:43.543954 | orchestrator | Saturday 03 January 2026 00:48:45 +0000 (0:00:00.917) 0:00:06.279 ****** 2026-01-03 00:51:43.543957 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:51:43.543961 | orchestrator | 2026-01-03 00:51:43.543965 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-01-03 00:51:43.543969 | orchestrator | Saturday 03 January 2026 00:48:46 +0000 (0:00:00.427) 0:00:06.707 ****** 2026-01-03 00:51:43.543973 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:51:43.543976 | orchestrator | 2026-01-03 00:51:43.543980 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-01-03 00:51:43.543984 | 
orchestrator | Saturday 03 January 2026 00:48:46 +0000 (0:00:00.371) 0:00:07.078 ****** 2026-01-03 00:51:43.543988 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:51:43.543991 | orchestrator | 2026-01-03 00:51:43.543995 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-01-03 00:51:43.544001 | orchestrator | Saturday 03 January 2026 00:48:46 +0000 (0:00:00.495) 0:00:07.574 ****** 2026-01-03 00:51:43.544005 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:51:43.544009 | orchestrator | 2026-01-03 00:51:43.544013 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-03 00:51:43.544016 | orchestrator | Saturday 03 January 2026 00:48:47 +0000 (0:00:00.667) 0:00:08.242 ****** 2026-01-03 00:51:43.544020 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:51:43.544024 | orchestrator | 2026-01-03 00:51:43.544028 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-03 00:51:43.544032 | orchestrator | Saturday 03 January 2026 00:48:48 +0000 (0:00:00.560) 0:00:08.802 ****** 2026-01-03 00:51:43.544035 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:51:43.544039 | orchestrator | 2026-01-03 00:51:43.544043 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-01-03 00:51:43.544046 | orchestrator | Saturday 03 January 2026 00:48:49 +0000 (0:00:00.910) 0:00:09.713 ****** 2026-01-03 00:51:43.544050 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:51:43.544054 | orchestrator | 2026-01-03 00:51:43.544058 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-01-03 00:51:43.544061 | orchestrator | Saturday 03 January 2026 00:48:49 +0000 (0:00:00.341) 0:00:10.054 ****** 2026-01-03 00:51:43.544065 | orchestrator | 
skipping: [testbed-node-0] 2026-01-03 00:51:43.544069 | orchestrator | 2026-01-03 00:51:43.544082 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-01-03 00:51:43.544085 | orchestrator | Saturday 03 January 2026 00:48:49 +0000 (0:00:00.533) 0:00:10.587 ****** 2026-01-03 00:51:43.544095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-03 00:51:43.544103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-03 00:51:43.544108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-03 00:51:43.544115 | orchestrator | 2026-01-03 00:51:43.544119 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-01-03 00:51:43.544123 | orchestrator | Saturday 03 January 2026 00:48:51 +0000 (0:00:01.474) 0:00:12.061 ****** 2026-01-03 00:51:43.544131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-03 00:51:43.544160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-03 00:51:43.544167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-03 00:51:43.544177 | orchestrator | 2026-01-03 00:51:43.544184 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-01-03 00:51:43.544190 | orchestrator | Saturday 03 January 2026 00:48:52 +0000 (0:00:01.549) 0:00:13.611 ****** 2026-01-03 00:51:43.544196 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-03 00:51:43.544204 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-03 00:51:43.544210 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-03 00:51:43.544214 | orchestrator | 2026-01-03 00:51:43.544218 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
2026-01-03 00:51:43.544222 | orchestrator | Saturday 03 January 2026 00:48:54 +0000 (0:00:01.849) 0:00:15.460 ******
2026-01-03 00:51:43.544226 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-01-03 00:51:43.544229 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-01-03 00:51:43.544233 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-01-03 00:51:43.544237 | orchestrator |
2026-01-03 00:51:43.544240 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-01-03 00:51:43.544244 | orchestrator | Saturday 03 January 2026 00:48:56 +0000 (0:00:01.788) 0:00:17.248 ******
2026-01-03 00:51:43.544248 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-01-03 00:51:43.544252 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-01-03 00:51:43.544255 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-01-03 00:51:43.544259 | orchestrator |
2026-01-03 00:51:43.544263 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-01-03 00:51:43.544266 | orchestrator | Saturday 03 January 2026 00:48:57 +0000 (0:00:01.166) 0:00:18.414 ******
2026-01-03 00:51:43.544273 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-01-03 00:51:43.544277 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-01-03 00:51:43.544281 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-01-03 00:51:43.544285 | orchestrator |
2026-01-03 00:51:43.544288 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-01-03 00:51:43.544292 | orchestrator | Saturday 03 January 2026 00:48:59 +0000 (0:00:01.396) 0:00:19.811 ******
2026-01-03 00:51:43.544296 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-01-03 00:51:43.544299 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-01-03 00:51:43.544303 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-01-03 00:51:43.544307 | orchestrator |
2026-01-03 00:51:43.544310 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-01-03 00:51:43.544314 | orchestrator | Saturday 03 January 2026 00:49:01 +0000 (0:00:02.035) 0:00:21.846 ******
2026-01-03 00:51:43.544318 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-01-03 00:51:43.544321 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-01-03 00:51:43.544327 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-01-03 00:51:43.544334 | orchestrator |
2026-01-03 00:51:43.544338 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-01-03 00:51:43.544342 | orchestrator | Saturday 03 January 2026 00:49:02 +0000 (0:00:01.641) 0:00:23.487 ******
2026-01-03 00:51:43.544346 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:51:43.544351 | orchestrator |
2026-01-03 00:51:43.544356 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] *******
2026-01-03 00:51:43.544362 | orchestrator | Saturday 03 January 2026 00:49:03 +0000 (0:00:00.742) 0:00:24.229 ******
2026-01-03 00:51:43.544369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-03 00:51:43.544375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-03 00:51:43.544388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-03 00:51:43.544396 | orchestrator |
2026-01-03 00:51:43.544402 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] ***
2026-01-03 00:51:43.544412 | orchestrator | Saturday 03 January 2026 00:49:05 +0000 (0:00:01.810) 0:00:26.040 ******
2026-01-03 00:51:43.544421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-03 00:51:43.544428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-03 00:51:43.544435 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:51:43.544441 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:51:43.544450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-03 00:51:43.544457 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:51:43.544463 | orchestrator |
2026-01-03 00:51:43.544469 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] ****
2026-01-03 00:51:43.544475 | orchestrator | Saturday 03 January 2026 00:49:05 +0000 (0:00:00.501) 0:00:26.541 ******
2026-01-03 00:51:43.544503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-03 00:51:43.544516 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:51:43.544523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-03 00:51:43.544530 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:51:43.544536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-03 00:51:43.544544 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:51:43.544551 | orchestrator |
2026-01-03 00:51:43.544556 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ******************
2026-01-03 00:51:43.544560 | orchestrator | Saturday 03 January 2026 00:49:06 +0000 (0:00:00.953) 0:00:27.495 ******
2026-01-03 00:51:43.544570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-03 00:51:43.544585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-03 00:51:43.544592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-03 00:51:43.544598 | orchestrator |
2026-01-03 00:51:43.544605 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] ***
2026-01-03 00:51:43.544611 | orchestrator | Saturday 03 January 2026 00:49:07 +0000 (0:00:00.916) 0:00:28.412 ******
2026-01-03 00:51:43.544618 | orchestrator | changed: [testbed-node-0] => {
2026-01-03 00:51:43.544624 | orchestrator |  "msg": "Notifying handlers"
2026-01-03 00:51:43.544631 | orchestrator | }
2026-01-03 00:51:43.544637 | orchestrator | changed: [testbed-node-1] => {
2026-01-03 00:51:43.544643 | orchestrator |  "msg": "Notifying handlers"
2026-01-03 00:51:43.544648 | orchestrator | }
2026-01-03 00:51:43.544652 | orchestrator | changed: [testbed-node-2] => {
2026-01-03 00:51:43.544656 | orchestrator |  "msg": "Notifying handlers"
2026-01-03 00:51:43.544660 | orchestrator | }
2026-01-03 00:51:43.544664 | orchestrator |
2026-01-03 00:51:43.544667 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-03 00:51:43.544671 | orchestrator | Saturday 03 January 2026 00:49:08 +0000 (0:00:00.520) 0:00:28.932 ******
2026-01-03 00:51:43.544680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-03 00:51:43.544688 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:51:43.544694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-03 00:51:43.544698 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:51:43.544702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-03 00:51:43.544707 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:51:43.544710 | orchestrator |
2026-01-03 00:51:43.544714 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2026-01-03 00:51:43.544718 | orchestrator | Saturday 03 January 2026 00:49:08 +0000 (0:00:00.620) 0:00:29.553 ******
2026-01-03 00:51:43.544722 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:51:43.544725 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:51:43.544729 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:51:43.544733 | orchestrator |
2026-01-03 00:51:43.544736 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-01-03 00:51:43.544740 | orchestrator | Saturday 03 January 2026 00:49:09 +0000 (0:00:00.775) 0:00:30.328 ******
2026-01-03 00:51:43.544744 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:51:43.544748 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:51:43.544751 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:51:43.544755 | orchestrator |
2026-01-03 00:51:43.544759 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-01-03 00:51:43.544763 | orchestrator | Saturday 03 January 2026 00:49:17 +0000 (0:00:08.059) 0:00:38.388 ******
2026-01-03 00:51:43.544770 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:51:43.544773 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:51:43.544777 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:51:43.544781 | orchestrator |
2026-01-03 00:51:43.544785 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-01-03 00:51:43.544788 | orchestrator |
2026-01-03 00:51:43.544792 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-01-03 00:51:43.544796 | orchestrator | Saturday 03 January 2026 00:49:18 +0000 (0:00:00.425) 0:00:38.813 ******
2026-01-03 00:51:43.544800 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:51:43.544803 | orchestrator |
2026-01-03 00:51:43.544807 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-01-03 00:51:43.544811 | orchestrator | Saturday 03 January 2026 00:49:18 +0000 (0:00:00.602) 0:00:39.415 ******
2026-01-03 00:51:43.544814 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:51:43.544818 | orchestrator |
2026-01-03 00:51:43.544822 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-01-03 00:51:43.544826 | orchestrator | Saturday 03 January 2026 00:49:18 +0000 (0:00:00.111) 0:00:39.527 ******
2026-01-03 00:51:43.544829 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:51:43.544833 | orchestrator |
2026-01-03 00:51:43.544839 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-01-03 00:51:43.544843 | orchestrator | Saturday 03 January 2026 00:49:25 +0000 (0:00:06.521) 0:00:46.049 ******
2026-01-03 00:51:43.544847 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:51:43.544851 | orchestrator |
2026-01-03 00:51:43.544854 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-01-03 00:51:43.544858 | orchestrator |
2026-01-03 00:51:43.544862 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-01-03 00:51:43.544866 | orchestrator | Saturday 03 January 2026 00:51:12 +0000 (0:01:47.281) 0:02:33.330 ******
2026-01-03 00:51:43.544870 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:51:43.544873 | orchestrator |
2026-01-03 00:51:43.544877 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-01-03 00:51:43.544881 | orchestrator | Saturday 03 January 2026 00:51:13 +0000 (0:00:00.711) 0:02:34.042 ******
2026-01-03 00:51:43.544885 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:51:43.544888 | orchestrator |
2026-01-03 00:51:43.544892 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-01-03 00:51:43.544896 | orchestrator | Saturday 03 January 2026 00:51:13 +0000 (0:00:00.096) 0:02:34.139 ******
2026-01-03 00:51:43.544900 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:51:43.544903 | orchestrator |
2026-01-03 00:51:43.544907 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-01-03 00:51:43.544911 | orchestrator | Saturday 03 January 2026 00:51:15 +0000 (0:00:01.618) 0:02:35.757 ******
2026-01-03 00:51:43.544915 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:51:43.544918 | orchestrator |
2026-01-03 00:51:43.544922 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-01-03 00:51:43.544926 | orchestrator |
2026-01-03 00:51:43.544931 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-01-03 00:51:43.544935 | orchestrator | Saturday 03 January 2026 00:51:24 +0000 (0:00:09.795) 0:02:45.552 ******
2026-01-03 00:51:43.544939 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:51:43.544943 | orchestrator |
2026-01-03 00:51:43.544947 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-01-03 00:51:43.544950 | orchestrator | Saturday 03 January 2026 00:51:25 +0000 (0:00:01.017) 0:02:46.570 ******
2026-01-03 00:51:43.544954 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:51:43.544958 | orchestrator |
2026-01-03 00:51:43.544961 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-01-03 00:51:43.544965 | orchestrator | Saturday 03 January 2026 00:51:26 +0000 (0:00:00.092) 0:02:46.662 ******
2026-01-03 00:51:43.544969 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:51:43.544975 | orchestrator |
2026-01-03 00:51:43.544981 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-01-03 00:51:43.544987 | orchestrator | Saturday 03 January 2026 00:51:28 +0000 (0:00:02.051) 0:02:48.714 ******
2026-01-03 00:51:43.544993 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:51:43.544999 | orchestrator |
2026-01-03 00:51:43.545005 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-01-03 00:51:43.545011 | orchestrator |
2026-01-03 00:51:43.545016 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-01-03 00:51:43.545022 | orchestrator | Saturday 03 January 2026 00:51:39 +0000 (0:00:11.140) 0:02:59.854 ******
2026-01-03 00:51:43.545028 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:51:43.545033 | orchestrator |
2026-01-03 00:51:43.545039 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-01-03 00:51:43.545045 | orchestrator | Saturday 03 January 2026 00:51:39 +0000 (0:00:00.427) 0:03:00.282 ******
2026-01-03 00:51:43.545052 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:51:43.545059 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:51:43.545065 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:51:43.545072 | orchestrator |
2026-01-03 00:51:43.545076 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:51:43.545080 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-01-03 00:51:43.545085 | orchestrator | testbed-node-0 : ok=26  changed=16  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-03 00:51:43.545088 | orchestrator | testbed-node-1 : ok=24  changed=16  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-03 00:51:43.545092 | orchestrator | testbed-node-2 : ok=24  changed=16  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-03 00:51:43.545096 | orchestrator |
2026-01-03 00:51:43.545100 | orchestrator |
2026-01-03 00:51:43.545103 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:51:43.545107 | orchestrator | Saturday 03 January 2026 00:51:42 +0000 (0:00:03.216) 0:03:03.498 ******
2026-01-03 00:51:43.545111 | orchestrator | ===============================================================================
2026-01-03 00:51:43.545115 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------ 128.22s
2026-01-03 00:51:43.545119 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.19s
2026-01-03 00:51:43.545125 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.06s
2026-01-03 00:51:43.545131 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.22s
2026-01-03 00:51:43.545153 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.07s
2026-01-03 00:51:43.545159 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.33s
2026-01-03 00:51:43.545165 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.04s
2026-01-03 00:51:43.545175 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.85s
2026-01-03 00:51:43.545182 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 1.81s
2026-01-03 00:51:43.545188 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.79s
2026-01-03 00:51:43.545194 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.64s
2026-01-03 00:51:43.545200 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.55s
2026-01-03 00:51:43.545207 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.47s
2026-01-03 00:51:43.545212 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.40s
2026-01-03 00:51:43.545220 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.17s
2026-01-03 00:51:43.545224 | orchestrator | service-cert-copy : rabbitmq | Copying over backend internal TLS key ---- 0.95s
2026-01-03 00:51:43.545228 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.92s
2026-01-03 00:51:43.545231 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 0.92s
2026-01-03 00:51:43.545235 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.91s
2026-01-03 00:51:43.545239 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.88s
2026-01-03 00:51:43.545242 | orchestrator | 2026-01-03 00:51:43 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED
2026-01-03 00:51:43.545250 | orchestrator | 2026-01-03 00:51:43 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED
2026-01-03 00:51:43.545306 | orchestrator | 2026-01-03 00:51:43 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:51:46.574969 | orchestrator | 2026-01-03 00:51:46 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED
2026-01-03 00:51:46.575724 | orchestrator | 2026-01-03 00:51:46 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED
2026-01-03 00:51:46.576723 | orchestrator | 2026-01-03 00:51:46 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED
2026-01-03 00:51:46.576767 | orchestrator | 2026-01-03 00:51:46 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:51:49.606486 | orchestrator | 2026-01-03 00:51:49 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED
2026-01-03 00:51:49.607927 | orchestrator | 2026-01-03 00:51:49 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED
2026-01-03 00:51:49.610586 | orchestrator | 2026-01-03 00:51:49 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED
2026-01-03 00:51:49.610638 | orchestrator | 2026-01-03 00:51:49 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:51:52.642902 | orchestrator | 2026-01-03 00:51:52 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED
2026-01-03 00:51:52.645386 | orchestrator | 2026-01-03 00:51:52 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED
2026-01-03 00:51:52.647402 | orchestrator | 2026-01-03 00:51:52 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED
2026-01-03 00:51:52.647474 | orchestrator | 2026-01-03 00:51:52 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:51:55.678688 | orchestrator | 2026-01-03 00:51:55 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED
2026-01-03 00:51:55.679254 | orchestrator | 2026-01-03 00:51:55 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED
2026-01-03 00:51:55.680100 | orchestrator | 2026-01-03 00:51:55 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED
2026-01-03 00:51:55.681063 | orchestrator | 2026-01-03 00:51:55 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:51:58.703731 | orchestrator | 2026-01-03 00:51:58 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED
2026-01-03 00:51:58.704324 | orchestrator | 2026-01-03 00:51:58 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED
2026-01-03 00:51:58.705993 | orchestrator | 2026-01-03 00:51:58 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED
2026-01-03 00:51:58.706090 | orchestrator | 2026-01-03 00:51:58 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:52:01.748662 | orchestrator | 2026-01-03 00:52:01 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED
2026-01-03 00:52:01.749499 | orchestrator | 2026-01-03 00:52:01 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED
2026-01-03 00:52:01.751345 | orchestrator | 2026-01-03 00:52:01 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED
2026-01-03 00:52:01.751385 | orchestrator | 2026-01-03 00:52:01 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:52:04.799282 | orchestrator | 2026-01-03 00:52:04 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED
2026-01-03 00:52:04.801560 | orchestrator | 2026-01-03 00:52:04 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED
2026-01-03 00:52:04.803071 | orchestrator | 2026-01-03 00:52:04 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED
2026-01-03 00:52:04.803327 | orchestrator
| 2026-01-03 00:52:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:52:07.830593 | orchestrator | 2026-01-03 00:52:07 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:52:07.831712 | orchestrator | 2026-01-03 00:52:07 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:52:07.832713 | orchestrator | 2026-01-03 00:52:07 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:52:07.832748 | orchestrator | 2026-01-03 00:52:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:52:10.868897 | orchestrator | 2026-01-03 00:52:10 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:52:10.869228 | orchestrator | 2026-01-03 00:52:10 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:52:10.869810 | orchestrator | 2026-01-03 00:52:10 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:52:10.869836 | orchestrator | 2026-01-03 00:52:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:52:13.918433 | orchestrator | 2026-01-03 00:52:13 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:52:13.920559 | orchestrator | 2026-01-03 00:52:13 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:52:13.923274 | orchestrator | 2026-01-03 00:52:13 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:52:13.923326 | orchestrator | 2026-01-03 00:52:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:52:16.957155 | orchestrator | 2026-01-03 00:52:16 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:52:16.957338 | orchestrator | 2026-01-03 00:52:16 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:52:16.958305 | orchestrator | 2026-01-03 00:52:16 | INFO  | Task 
00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:52:16.958349 | orchestrator | 2026-01-03 00:52:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:52:19.981335 | orchestrator | 2026-01-03 00:52:19 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:52:19.982301 | orchestrator | 2026-01-03 00:52:19 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:52:19.983839 | orchestrator | 2026-01-03 00:52:19 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:52:19.983876 | orchestrator | 2026-01-03 00:52:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:52:23.012756 | orchestrator | 2026-01-03 00:52:23 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:52:23.021363 | orchestrator | 2026-01-03 00:52:23 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:52:23.021461 | orchestrator | 2026-01-03 00:52:23 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:52:23.021470 | orchestrator | 2026-01-03 00:52:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:52:26.040794 | orchestrator | 2026-01-03 00:52:26 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:52:26.043693 | orchestrator | 2026-01-03 00:52:26 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:52:26.044454 | orchestrator | 2026-01-03 00:52:26 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:52:26.044509 | orchestrator | 2026-01-03 00:52:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:52:29.101880 | orchestrator | 2026-01-03 00:52:29 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:52:29.104625 | orchestrator | 2026-01-03 00:52:29 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state 
STARTED 2026-01-03 00:52:29.104683 | orchestrator | 2026-01-03 00:52:29 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:52:29.104692 | orchestrator | 2026-01-03 00:52:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:52:32.138315 | orchestrator | 2026-01-03 00:52:32 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:52:32.139090 | orchestrator | 2026-01-03 00:52:32 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:52:32.140066 | orchestrator | 2026-01-03 00:52:32 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:52:32.140097 | orchestrator | 2026-01-03 00:52:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:52:35.178201 | orchestrator | 2026-01-03 00:52:35 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:52:35.180799 | orchestrator | 2026-01-03 00:52:35 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:52:35.181329 | orchestrator | 2026-01-03 00:52:35 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state STARTED 2026-01-03 00:52:35.181350 | orchestrator | 2026-01-03 00:52:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:52:38.219665 | orchestrator | 2026-01-03 00:52:38 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:52:38.221835 | orchestrator | 2026-01-03 00:52:38 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:52:38.225512 | orchestrator | 2026-01-03 00:52:38 | INFO  | Task 00eaf01d-fbe5-4d87-a914-586e857b7be4 is in state SUCCESS 2026-01-03 00:52:38.227131 | orchestrator | 2026-01-03 00:52:38.227161 | orchestrator | 2026-01-03 00:52:38.227169 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-03 00:52:38.227176 | orchestrator | 2026-01-03 00:52:38.227183 | 
orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-03 00:52:38.227190 | orchestrator | Saturday 03 January 2026 00:49:24 +0000 (0:00:00.333) 0:00:00.333 ****** 2026-01-03 00:52:38.227197 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:38.227204 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:38.227211 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:38.227218 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:52:38.227225 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:52:38.227231 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:52:38.227238 | orchestrator | 2026-01-03 00:52:38.227244 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-03 00:52:38.227251 | orchestrator | Saturday 03 January 2026 00:49:25 +0000 (0:00:01.056) 0:00:01.390 ****** 2026-01-03 00:52:38.227272 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-01-03 00:52:38.227279 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-01-03 00:52:38.227286 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-01-03 00:52:38.227292 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-01-03 00:52:38.227300 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-01-03 00:52:38.227306 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-01-03 00:52:38.227313 | orchestrator | 2026-01-03 00:52:38.227320 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-01-03 00:52:38.227326 | orchestrator | 2026-01-03 00:52:38.227332 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-01-03 00:52:38.227339 | orchestrator | Saturday 03 January 2026 00:49:27 +0000 (0:00:01.208) 0:00:02.598 ****** 2026-01-03 00:52:38.227352 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for 
testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:52:38.227359 | orchestrator | 2026-01-03 00:52:38.227366 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-01-03 00:52:38.227373 | orchestrator | Saturday 03 January 2026 00:49:28 +0000 (0:00:01.034) 0:00:03.633 ****** 2026-01-03 00:52:38.227379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.227386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.227393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.227400 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.227407 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.227423 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.227439 | orchestrator | 2026-01-03 00:52:38.227457 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-01-03 00:52:38.227465 | orchestrator | Saturday 03 January 2026 00:49:29 +0000 (0:00:01.507) 0:00:05.140 ****** 2026-01-03 00:52:38.227471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.227478 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.227485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.227492 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.227499 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.227505 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.227511 | orchestrator | 2026-01-03 00:52:38.227517 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-01-03 00:52:38.227528 | orchestrator | Saturday 03 January 2026 00:49:31 +0000 (0:00:01.746) 0:00:06.886 ****** 2026-01-03 00:52:38.227534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.227541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.227560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.227570 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.227580 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.227587 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.227593 | orchestrator | 2026-01-03 00:52:38.227600 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-01-03 00:52:38.227609 | orchestrator | Saturday 03 January 2026 00:49:32 +0000 (0:00:01.105) 0:00:07.991 ****** 2026-01-03 00:52:38.227616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.227623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.227629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.227636 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.227642 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.227655 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.227662 | orchestrator | 2026-01-03 00:52:38.227671 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-01-03 00:52:38.227678 | orchestrator | Saturday 03 January 2026 00:49:34 +0000 (0:00:01.606) 0:00:09.598 ****** 2026-01-03 00:52:38.227685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.227691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.227698 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.227704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.227711 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.227717 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.227724 | orchestrator | 2026-01-03 00:52:38.227731 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-01-03 00:52:38.227737 | orchestrator | Saturday 03 January 2026 00:49:35 +0000 (0:00:01.690) 0:00:11.288 ****** 2026-01-03 00:52:38.227744 | orchestrator | changed: [testbed-node-0] => { 2026-01-03 00:52:38.227755 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:52:38.227762 | orchestrator | } 2026-01-03 00:52:38.227769 | 
orchestrator | changed: [testbed-node-1] => { 2026-01-03 00:52:38.227776 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:52:38.227782 | orchestrator | } 2026-01-03 00:52:38.227789 | orchestrator | changed: [testbed-node-2] => { 2026-01-03 00:52:38.227796 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:52:38.227802 | orchestrator | } 2026-01-03 00:52:38.227810 | orchestrator | changed: [testbed-node-3] => { 2026-01-03 00:52:38.227821 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:52:38.227827 | orchestrator | } 2026-01-03 00:52:38.227834 | orchestrator | changed: [testbed-node-4] => { 2026-01-03 00:52:38.227844 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:52:38.227856 | orchestrator | } 2026-01-03 00:52:38.227873 | orchestrator | changed: [testbed-node-5] => { 2026-01-03 00:52:38.227885 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:52:38.227895 | orchestrator | } 2026-01-03 00:52:38.227902 | orchestrator | 2026-01-03 00:52:38.227912 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-03 00:52:38.227921 | orchestrator | Saturday 03 January 2026 00:49:36 +0000 (0:00:01.125) 0:00:12.414 ****** 2026-01-03 00:52:38.227933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.227941 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:38.227952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.227959 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:38.227966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.227973 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:38.227980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.227987 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.227994 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:52:38.228001 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:52:38.228008 | orchestrator | skipping: [testbed-node-5] 
=> (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.228052 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:52:38.228060 | orchestrator | 2026-01-03 00:52:38.228068 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-01-03 00:52:38.228075 | orchestrator | Saturday 03 January 2026 00:49:37 +0000 (0:00:00.992) 0:00:13.406 ****** 2026-01-03 00:52:38.228082 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:52:38.228089 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:52:38.228097 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:52:38.228103 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:52:38.228115 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:52:38.228126 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:52:38.228132 | orchestrator | 2026-01-03 00:52:38.228139 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-01-03 00:52:38.228146 | orchestrator | Saturday 03 January 2026 00:49:41 +0000 (0:00:03.106) 0:00:16.513 ****** 2026-01-03 00:52:38.228153 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-01-03 00:52:38.228159 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-01-03 00:52:38.228165 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-01-03 00:52:38.228171 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-01-03 
00:52:38.228177 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-01-03 00:52:38.228184 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-01-03 00:52:38.228190 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-03 00:52:38.228196 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-03 00:52:38.228202 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-03 00:52:38.228208 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-03 00:52:38.228213 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-03 00:52:38.228222 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-03 00:52:38.228233 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-01-03 00:52:38.228241 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-01-03 00:52:38.228248 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-01-03 00:52:38.228255 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-01-03 00:52:38.228265 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-01-03 00:52:38.228271 | orchestrator | changed: 
[testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-01-03 00:52:38.228277 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-03 00:52:38.228281 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-03 00:52:38.228289 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-03 00:52:38.228293 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-03 00:52:38.228297 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-03 00:52:38.228300 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-03 00:52:38.228304 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-03 00:52:38.228308 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-03 00:52:38.228312 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-03 00:52:38.228315 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-03 00:52:38.228319 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-03 00:52:38.228323 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-03 00:52:38.228327 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-03 00:52:38.228330 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': 
False}) 2026-01-03 00:52:38.228334 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-03 00:52:38.228338 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-03 00:52:38.228342 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-03 00:52:38.228345 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-03 00:52:38.228349 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-03 00:52:38.228353 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-03 00:52:38.228357 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-03 00:52:38.228361 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-03 00:52:38.228364 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-03 00:52:38.228368 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-03 00:52:38.228372 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-01-03 00:52:38.228376 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-01-03 00:52:38.228380 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-01-03 00:52:38.228383 | orchestrator | 
changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-01-03 00:52:38.228390 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-01-03 00:52:38.228396 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-01-03 00:52:38.228400 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-03 00:52:38.228404 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-03 00:52:38.228413 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-03 00:52:38.228417 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-03 00:52:38.228421 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-03 00:52:38.228424 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-03 00:52:38.228428 | orchestrator | 2026-01-03 00:52:38.228432 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-03 00:52:38.228436 | orchestrator | Saturday 03 January 2026 00:50:00 +0000 (0:00:19.320) 0:00:35.834 ****** 2026-01-03 00:52:38.228440 | orchestrator | 2026-01-03 00:52:38.228444 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-03 00:52:38.228447 | orchestrator | Saturday 03 January 2026 00:50:00 +0000 (0:00:00.149) 
0:00:35.983 ****** 2026-01-03 00:52:38.228451 | orchestrator | 2026-01-03 00:52:38.228455 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-03 00:52:38.228459 | orchestrator | Saturday 03 January 2026 00:50:00 +0000 (0:00:00.148) 0:00:36.132 ****** 2026-01-03 00:52:38.228462 | orchestrator | 2026-01-03 00:52:38.228466 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-03 00:52:38.228470 | orchestrator | Saturday 03 January 2026 00:50:00 +0000 (0:00:00.157) 0:00:36.289 ****** 2026-01-03 00:52:38.228474 | orchestrator | 2026-01-03 00:52:38.228477 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-03 00:52:38.228481 | orchestrator | Saturday 03 January 2026 00:50:00 +0000 (0:00:00.077) 0:00:36.367 ****** 2026-01-03 00:52:38.228485 | orchestrator | 2026-01-03 00:52:38.228489 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-03 00:52:38.228493 | orchestrator | Saturday 03 January 2026 00:50:01 +0000 (0:00:00.071) 0:00:36.439 ****** 2026-01-03 00:52:38.228496 | orchestrator | 2026-01-03 00:52:38.228500 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-01-03 00:52:38.228504 | orchestrator | Saturday 03 January 2026 00:50:01 +0000 (0:00:00.081) 0:00:36.520 ****** 2026-01-03 00:52:38.228508 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:38.228512 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:52:38.228515 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:52:38.228519 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:52:38.228523 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:38.228527 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:38.228530 | orchestrator | 2026-01-03 00:52:38.228534 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] 
************ 2026-01-03 00:52:38.228538 | orchestrator | Saturday 03 January 2026 00:50:03 +0000 (0:00:02.335) 0:00:38.856 ****** 2026-01-03 00:52:38.228542 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:52:38.228545 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:52:38.228549 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:52:38.228553 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:52:38.228557 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:52:38.228560 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:52:38.228564 | orchestrator | 2026-01-03 00:52:38.228568 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-01-03 00:52:38.228572 | orchestrator | 2026-01-03 00:52:38.228575 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-03 00:52:38.228579 | orchestrator | Saturday 03 January 2026 00:50:11 +0000 (0:00:08.332) 0:00:47.189 ****** 2026-01-03 00:52:38.228583 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:52:38.228587 | orchestrator | 2026-01-03 00:52:38.228593 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-03 00:52:38.228600 | orchestrator | Saturday 03 January 2026 00:50:12 +0000 (0:00:00.509) 0:00:47.698 ****** 2026-01-03 00:52:38.228604 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:52:38.228608 | orchestrator | 2026-01-03 00:52:38.228611 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-01-03 00:52:38.228615 | orchestrator | Saturday 03 January 2026 00:50:13 +0000 (0:00:00.797) 0:00:48.495 ****** 2026-01-03 00:52:38.228619 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:38.228622 | orchestrator | ok: 
[testbed-node-1] 2026-01-03 00:52:38.228626 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:38.228630 | orchestrator | 2026-01-03 00:52:38.228633 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-01-03 00:52:38.228637 | orchestrator | Saturday 03 January 2026 00:50:13 +0000 (0:00:00.747) 0:00:49.243 ****** 2026-01-03 00:52:38.228641 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:38.228645 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:38.228648 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:38.228652 | orchestrator | 2026-01-03 00:52:38.228657 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-01-03 00:52:38.228663 | orchestrator | Saturday 03 January 2026 00:50:14 +0000 (0:00:00.439) 0:00:49.683 ****** 2026-01-03 00:52:38.228670 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:38.228676 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:38.228684 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:38.228688 | orchestrator | 2026-01-03 00:52:38.228692 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-01-03 00:52:38.228698 | orchestrator | Saturday 03 January 2026 00:50:14 +0000 (0:00:00.427) 0:00:50.111 ****** 2026-01-03 00:52:38.228702 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:38.228706 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:38.228710 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:38.228714 | orchestrator | 2026-01-03 00:52:38.228718 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-01-03 00:52:38.228722 | orchestrator | Saturday 03 January 2026 00:50:15 +0000 (0:00:00.360) 0:00:50.471 ****** 2026-01-03 00:52:38.228725 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:38.228729 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:38.228733 | orchestrator | ok: 
[testbed-node-2] 2026-01-03 00:52:38.228736 | orchestrator | 2026-01-03 00:52:38.228740 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-01-03 00:52:38.228744 | orchestrator | Saturday 03 January 2026 00:50:15 +0000 (0:00:00.334) 0:00:50.806 ****** 2026-01-03 00:52:38.228748 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:38.228752 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:38.228755 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:38.228759 | orchestrator | 2026-01-03 00:52:38.228763 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-01-03 00:52:38.228767 | orchestrator | Saturday 03 January 2026 00:50:15 +0000 (0:00:00.268) 0:00:51.075 ****** 2026-01-03 00:52:38.228770 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:38.228774 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:38.228778 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:38.228782 | orchestrator | 2026-01-03 00:52:38.228789 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-01-03 00:52:38.228792 | orchestrator | Saturday 03 January 2026 00:50:16 +0000 (0:00:00.402) 0:00:51.477 ****** 2026-01-03 00:52:38.228796 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:38.228800 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:38.228804 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:38.228807 | orchestrator | 2026-01-03 00:52:38.228811 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-01-03 00:52:38.228815 | orchestrator | Saturday 03 January 2026 00:50:16 +0000 (0:00:00.267) 0:00:51.744 ****** 2026-01-03 00:52:38.228821 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:38.228825 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:38.228829 | orchestrator | skipping: 
[testbed-node-2] 2026-01-03 00:52:38.228833 | orchestrator | 2026-01-03 00:52:38.228837 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-01-03 00:52:38.228840 | orchestrator | Saturday 03 January 2026 00:50:16 +0000 (0:00:00.276) 0:00:52.021 ****** 2026-01-03 00:52:38.228844 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:38.228848 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:38.228852 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:38.228856 | orchestrator | 2026-01-03 00:52:38.228862 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-01-03 00:52:38.228869 | orchestrator | Saturday 03 January 2026 00:50:16 +0000 (0:00:00.246) 0:00:52.268 ****** 2026-01-03 00:52:38.228879 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:38.228885 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:38.228891 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:38.228897 | orchestrator | 2026-01-03 00:52:38.228907 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-01-03 00:52:38.228913 | orchestrator | Saturday 03 January 2026 00:50:17 +0000 (0:00:00.378) 0:00:52.646 ****** 2026-01-03 00:52:38.228919 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:38.228926 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:38.228932 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:38.228938 | orchestrator | 2026-01-03 00:52:38.228945 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-01-03 00:52:38.228951 | orchestrator | Saturday 03 January 2026 00:50:17 +0000 (0:00:00.263) 0:00:52.910 ****** 2026-01-03 00:52:38.228957 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:38.228963 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:38.228969 | orchestrator | skipping: 
[testbed-node-2] 2026-01-03 00:52:38.228976 | orchestrator | 2026-01-03 00:52:38.228982 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-01-03 00:52:38.228987 | orchestrator | Saturday 03 January 2026 00:50:17 +0000 (0:00:00.267) 0:00:53.178 ****** 2026-01-03 00:52:38.228991 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:38.228997 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:38.229005 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:38.229026 | orchestrator | 2026-01-03 00:52:38.229033 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-01-03 00:52:38.229039 | orchestrator | Saturday 03 January 2026 00:50:18 +0000 (0:00:00.275) 0:00:53.453 ****** 2026-01-03 00:52:38.229045 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:38.229051 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:38.229057 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:38.229063 | orchestrator | 2026-01-03 00:52:38.229070 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-01-03 00:52:38.229077 | orchestrator | Saturday 03 January 2026 00:50:18 +0000 (0:00:00.256) 0:00:53.710 ****** 2026-01-03 00:52:38.229082 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:38.229089 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:38.229093 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:38.229097 | orchestrator | 2026-01-03 00:52:38.229101 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-01-03 00:52:38.229105 | orchestrator | Saturday 03 January 2026 00:50:18 +0000 (0:00:00.410) 0:00:54.120 ****** 2026-01-03 00:52:38.229108 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:38.229114 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:38.229120 | orchestrator | skipping: 
[testbed-node-2] 2026-01-03 00:52:38.229126 | orchestrator | 2026-01-03 00:52:38.229132 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-03 00:52:38.229139 | orchestrator | Saturday 03 January 2026 00:50:18 +0000 (0:00:00.259) 0:00:54.380 ****** 2026-01-03 00:52:38.229156 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:52:38.229163 | orchestrator | 2026-01-03 00:52:38.229180 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-01-03 00:52:38.229185 | orchestrator | Saturday 03 January 2026 00:50:19 +0000 (0:00:00.545) 0:00:54.925 ****** 2026-01-03 00:52:38.229189 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:38.229192 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:38.229196 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:38.229200 | orchestrator | 2026-01-03 00:52:38.229204 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-01-03 00:52:38.229208 | orchestrator | Saturday 03 January 2026 00:50:20 +0000 (0:00:00.533) 0:00:55.459 ****** 2026-01-03 00:52:38.229211 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:38.229215 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:38.229219 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:38.229223 | orchestrator | 2026-01-03 00:52:38.229226 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-01-03 00:52:38.229230 | orchestrator | Saturday 03 January 2026 00:50:20 +0000 (0:00:00.404) 0:00:55.864 ****** 2026-01-03 00:52:38.229234 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:38.229238 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:38.229242 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:38.229246 | orchestrator | 2026-01-03 00:52:38.229249 | 
orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-01-03 00:52:38.229253 | orchestrator | Saturday 03 January 2026 00:50:20 +0000 (0:00:00.324) 0:00:56.188 ****** 2026-01-03 00:52:38.229257 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:38.229261 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:38.229264 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:38.229268 | orchestrator | 2026-01-03 00:52:38.229272 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-01-03 00:52:38.229276 | orchestrator | Saturday 03 January 2026 00:50:21 +0000 (0:00:00.347) 0:00:56.536 ****** 2026-01-03 00:52:38.229279 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:38.229283 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:38.229287 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:38.229291 | orchestrator | 2026-01-03 00:52:38.229295 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-01-03 00:52:38.229299 | orchestrator | Saturday 03 January 2026 00:50:21 +0000 (0:00:00.639) 0:00:57.176 ****** 2026-01-03 00:52:38.229302 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:38.229306 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:38.229310 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:38.229314 | orchestrator | 2026-01-03 00:52:38.229317 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-01-03 00:52:38.229321 | orchestrator | Saturday 03 January 2026 00:50:22 +0000 (0:00:00.341) 0:00:57.517 ****** 2026-01-03 00:52:38.229325 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:38.229329 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:38.229333 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:38.229338 | orchestrator | 2026-01-03 
00:52:38.229348 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-01-03 00:52:38.229356 | orchestrator | Saturday 03 January 2026 00:50:22 +0000 (0:00:00.287) 0:00:57.805 ****** 2026-01-03 00:52:38.229362 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:38.229368 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:38.229375 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:38.229382 | orchestrator | 2026-01-03 00:52:38.229388 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-03 00:52:38.229394 | orchestrator | Saturday 03 January 2026 00:50:22 +0000 (0:00:00.238) 0:00:58.044 ****** 2026-01-03 00:52:38.229402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.229416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.229423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 
'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.229438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.229445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.229452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.229458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.229465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.229475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.229482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.229489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.229517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.229524 | orchestrator | 2026-01-03 00:52:38.229531 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-03 00:52:38.229537 | orchestrator | Saturday 03 January 2026 00:50:25 +0000 (0:00:02.596) 0:01:00.640 ****** 2026-01-03 00:52:38.229544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 
'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.229551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.229558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.229569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.229575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.229582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.229592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.229603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
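The config.json items above repeat the same NB/SB cluster connection strings for the three controller nodes. As an illustrative sketch (a hypothetical helper, not part of kolla-ansible or this job), those strings can be derived from the controller IPs and the standard OVN database ports (6641 northbound, 6642 southbound):

```python
def ovn_db_string(ips, port):
    """Join controller endpoints into the comma-separated form OVN expects."""
    return ",".join(f"tcp:{ip}:{port}" for ip in ips)

# Controller IPs as they appear in the task output above.
controllers = ["192.168.16.10", "192.168.16.11", "192.168.16.12"]

OVN_NB_DB = ovn_db_string(controllers, 6641)  # northbound DB cluster
OVN_SB_DB = ovn_db_string(controllers, 6642)  # southbound DB cluster

print(OVN_NB_DB)
print(OVN_SB_DB)
```

Every ovn-northd and DB-server container in the log receives these two environment variables, which is why the same strings recur in each `Ensuring config directories exist` and `Copying over config.json files` item.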
2026-01-03 00:52:38.229610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.229617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.229624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.229635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.229641 | orchestrator | 2026-01-03 00:52:38.229648 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-01-03 00:52:38.229654 | orchestrator | Saturday 03 January 2026 00:50:30 +0000 (0:00:05.214) 0:01:05.855 ****** 2026-01-03 00:52:38.229661 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-01-03 00:52:38.229668 | orchestrator | 2026-01-03 00:52:38.229675 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-01-03 00:52:38.229682 | orchestrator | Saturday 03 January 2026 00:50:30 +0000 (0:00:00.499) 0:01:06.354 ****** 2026-01-03 00:52:38.229688 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:52:38.229694 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:52:38.229701 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:52:38.229707 | orchestrator | 2026-01-03 00:52:38.229713 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-01-03 00:52:38.229720 | orchestrator | Saturday 03 January 2026 00:50:31 +0000 (0:00:00.754) 0:01:07.109 ****** 2026-01-03 00:52:38.229726 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:52:38.229733 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:52:38.229739 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:52:38.229745 | orchestrator | 2026-01-03 00:52:38.229752 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-01-03 00:52:38.229758 | orchestrator | Saturday 03 January 2026 00:50:33 +0000 (0:00:01.820) 0:01:08.930 ****** 2026-01-03 00:52:38.229765 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:52:38.229771 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:52:38.229777 | orchestrator | 
changed: [testbed-node-2] 2026-01-03 00:52:38.229784 | orchestrator | 2026-01-03 00:52:38.229790 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-01-03 00:52:38.229797 | orchestrator | Saturday 03 January 2026 00:50:35 +0000 (0:00:01.649) 0:01:10.579 ****** 2026-01-03 00:52:38.229810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.229817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.229824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.229837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.229845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.229851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.229858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 
'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.229865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.229880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.229887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.229893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.229905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.229912 | orchestrator | 2026-01-03 00:52:38.229918 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-01-03 00:52:38.229925 | orchestrator | Saturday 03 January 2026 00:50:38 +0000 (0:00:03.264) 0:01:13.844 ****** 2026-01-03 00:52:38.229931 | orchestrator | changed: [testbed-node-0] => { 2026-01-03 00:52:38.229938 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:52:38.229945 | orchestrator | } 2026-01-03 00:52:38.229951 | orchestrator | changed: [testbed-node-1] => { 2026-01-03 00:52:38.229957 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:52:38.229963 | orchestrator | } 2026-01-03 00:52:38.229970 | orchestrator | changed: [testbed-node-2] => { 2026-01-03 00:52:38.229976 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:52:38.229983 | orchestrator | } 2026-01-03 00:52:38.229989 | orchestrator | 2026-01-03 00:52:38.229996 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-03 00:52:38.230002 | orchestrator | Saturday 03 January 2026 00:50:38 +0000 (0:00:00.326) 0:01:14.170 ****** 2026-01-03 
00:52:38.230010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.230086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.230091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.230106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.230115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.230119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.230123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.230127 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.230131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.230135 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.230139 | orchestrator | 2026-01-03 00:52:38.230142 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-01-03 00:52:38.230146 | orchestrator | Saturday 03 January 2026 00:50:40 +0000 (0:00:02.224) 0:01:16.395 ****** 2026-01-03 00:52:38.230150 | orchestrator | changed: 
[testbed-node-0] => (item=[1]) 2026-01-03 00:52:38.230154 | orchestrator | changed: [testbed-node-1] => (item=[1]) 2026-01-03 00:52:38.230158 | orchestrator | changed: [testbed-node-2] => (item=[1]) 2026-01-03 00:52:38.230162 | orchestrator | 2026-01-03 00:52:38.230166 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-01-03 00:52:38.230172 | orchestrator | Saturday 03 January 2026 00:50:42 +0000 (0:00:01.110) 0:01:17.506 ****** 2026-01-03 00:52:38.230176 | orchestrator | changed: [testbed-node-0] => { 2026-01-03 00:52:38.230180 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:52:38.230184 | orchestrator | } 2026-01-03 00:52:38.230188 | orchestrator | changed: [testbed-node-1] => { 2026-01-03 00:52:38.230193 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:52:38.230197 | orchestrator | } 2026-01-03 00:52:38.230201 | orchestrator | changed: [testbed-node-2] => { 2026-01-03 00:52:38.230205 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:52:38.230212 | orchestrator | } 2026-01-03 00:52:38.230215 | orchestrator | 2026-01-03 00:52:38.230219 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-03 00:52:38.230223 | orchestrator | Saturday 03 January 2026 00:50:42 +0000 (0:00:00.657) 0:01:18.164 ****** 2026-01-03 00:52:38.230227 | orchestrator | 2026-01-03 00:52:38.230231 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-03 00:52:38.230235 | orchestrator | Saturday 03 January 2026 00:50:42 +0000 (0:00:00.065) 0:01:18.230 ****** 2026-01-03 00:52:38.230238 | orchestrator | 2026-01-03 00:52:38.230242 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-03 00:52:38.230246 | orchestrator | Saturday 03 January 2026 00:50:42 +0000 (0:00:00.063) 0:01:18.294 ****** 2026-01-03 00:52:38.230250 | orchestrator | 2026-01-03 00:52:38.230254 
| orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-03 00:52:38.230257 | orchestrator | Saturday 03 January 2026 00:50:42 +0000 (0:00:00.065) 0:01:18.360 ****** 2026-01-03 00:52:38.230261 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:52:38.230265 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:52:38.230269 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:52:38.230272 | orchestrator | 2026-01-03 00:52:38.230276 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-01-03 00:52:38.230280 | orchestrator | Saturday 03 January 2026 00:50:54 +0000 (0:00:11.833) 0:01:30.194 ****** 2026-01-03 00:52:38.230284 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:52:38.230288 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:52:38.230291 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:52:38.230295 | orchestrator | 2026-01-03 00:52:38.230299 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-01-03 00:52:38.230303 | orchestrator | Saturday 03 January 2026 00:51:04 +0000 (0:00:09.662) 0:01:39.857 ****** 2026-01-03 00:52:38.230306 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-01-03 00:52:38.230310 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-01-03 00:52:38.230314 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-01-03 00:52:38.230318 | orchestrator | 2026-01-03 00:52:38.230321 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-01-03 00:52:38.230325 | orchestrator | Saturday 03 January 2026 00:51:16 +0000 (0:00:12.339) 0:01:52.196 ****** 2026-01-03 00:52:38.230329 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:52:38.230333 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:52:38.230336 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:52:38.230340 | orchestrator | 
2026-01-03 00:52:38.230344 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-03 00:52:38.230348 | orchestrator | Saturday 03 January 2026 00:51:25 +0000 (0:00:08.608) 0:02:00.805 ****** 2026-01-03 00:52:38.230351 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:38.230355 | orchestrator | 2026-01-03 00:52:38.230359 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-01-03 00:52:38.230363 | orchestrator | Saturday 03 January 2026 00:51:25 +0000 (0:00:00.115) 0:02:00.921 ****** 2026-01-03 00:52:38.230367 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:38.230371 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:38.230375 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:38.230379 | orchestrator | 2026-01-03 00:52:38.230382 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-03 00:52:38.230389 | orchestrator | Saturday 03 January 2026 00:51:26 +0000 (0:00:00.949) 0:02:01.870 ****** 2026-01-03 00:52:38.230393 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:38.230396 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:38.230400 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:52:38.230404 | orchestrator | 2026-01-03 00:52:38.230408 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-01-03 00:52:38.230411 | orchestrator | Saturday 03 January 2026 00:51:27 +0000 (0:00:00.577) 0:02:02.448 ****** 2026-01-03 00:52:38.230416 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:38.230420 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:38.230423 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:38.230427 | orchestrator | 2026-01-03 00:52:38.230431 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-01-03 00:52:38.230435 | orchestrator | Saturday 03 
January 2026 00:51:27 +0000 (0:00:00.876) 0:02:03.325 ****** 2026-01-03 00:52:38.230440 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:38.230446 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:38.230453 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:52:38.230459 | orchestrator | 2026-01-03 00:52:38.230465 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-01-03 00:52:38.230472 | orchestrator | Saturday 03 January 2026 00:51:28 +0000 (0:00:00.619) 0:02:03.945 ****** 2026-01-03 00:52:38.230478 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:38.230484 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:38.230490 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:38.230496 | orchestrator | 2026-01-03 00:52:38.230502 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-03 00:52:38.230508 | orchestrator | Saturday 03 January 2026 00:51:29 +0000 (0:00:00.734) 0:02:04.679 ****** 2026-01-03 00:52:38.230515 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:38.230521 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:38.230527 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:38.230533 | orchestrator | 2026-01-03 00:52:38.230539 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-01-03 00:52:38.230546 | orchestrator | Saturday 03 January 2026 00:51:30 +0000 (0:00:00.741) 0:02:05.420 ****** 2026-01-03 00:52:38.230552 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-01-03 00:52:38.230559 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-01-03 00:52:38.230565 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-01-03 00:52:38.230571 | orchestrator | 2026-01-03 00:52:38.230578 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-01-03 00:52:38.230584 | orchestrator | Saturday 03 January 2026 00:51:30 
+0000 (0:00:00.928) 0:02:06.349 ****** 2026-01-03 00:52:38.230591 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:38.230597 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:38.230603 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:38.230610 | orchestrator | 2026-01-03 00:52:38.230619 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-03 00:52:38.230630 | orchestrator | Saturday 03 January 2026 00:51:31 +0000 (0:00:00.275) 0:02:06.624 ****** 2026-01-03 00:52:38.230637 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.230644 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.230654 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 
'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.230661 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.230668 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.230675 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.230682 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 
'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.230695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.230703 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.230714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.230721 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.230728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.230736 | orchestrator | 2026-01-03 00:52:38.230742 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-03 00:52:38.230748 | orchestrator | Saturday 03 January 2026 00:51:34 +0000 (0:00:03.619) 0:02:10.244 ****** 2026-01-03 00:52:38.230755 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.230762 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 
'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.230768 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.230783 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.230790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.230801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.230807 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.230811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.230815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.230819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.230825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.230838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.230845 | orchestrator | 2026-01-03 00:52:38.230855 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-01-03 00:52:38.230862 | orchestrator | 
Saturday 03 January 2026 00:51:40 +0000 (0:00:05.404) 0:02:15.649 ****** 2026-01-03 00:52:38.230869 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-01-03 00:52:38.230875 | orchestrator | 2026-01-03 00:52:38.230882 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-01-03 00:52:38.230888 | orchestrator | Saturday 03 January 2026 00:51:40 +0000 (0:00:00.580) 0:02:16.230 ****** 2026-01-03 00:52:38.230895 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:38.230899 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:38.230903 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:38.230907 | orchestrator | 2026-01-03 00:52:38.230910 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-01-03 00:52:38.230914 | orchestrator | Saturday 03 January 2026 00:51:41 +0000 (0:00:00.744) 0:02:16.974 ****** 2026-01-03 00:52:38.230918 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:38.230922 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:38.230925 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:38.230929 | orchestrator | 2026-01-03 00:52:38.230933 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-01-03 00:52:38.230937 | orchestrator | Saturday 03 January 2026 00:51:42 +0000 (0:00:01.416) 0:02:18.390 ****** 2026-01-03 00:52:38.230940 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:38.230944 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:38.230948 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:38.230952 | orchestrator | 2026-01-03 00:52:38.230955 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-01-03 00:52:38.230959 | orchestrator | Saturday 03 January 2026 00:51:44 +0000 (0:00:01.775) 0:02:20.166 ****** 2026-01-03 00:52:38.230963 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.230967 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.230971 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.230975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 
'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.230984 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.230995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.231000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.231004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': 
{'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.231008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.231012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.231032 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.231037 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.231044 | orchestrator | 2026-01-03 00:52:38.231048 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-01-03 00:52:38.231052 | orchestrator | Saturday 03 January 2026 00:51:49 +0000 (0:00:05.119) 0:02:25.285 ****** 2026-01-03 00:52:38.231056 | orchestrator | ok: [testbed-node-0] => { 2026-01-03 00:52:38.231060 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:52:38.231064 | orchestrator | } 2026-01-03 00:52:38.231068 | orchestrator | changed: [testbed-node-1] => { 2026-01-03 00:52:38.231072 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:52:38.231075 | orchestrator | } 2026-01-03 00:52:38.231080 | orchestrator | changed: [testbed-node-2] => { 2026-01-03 00:52:38.231083 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:52:38.231087 | orchestrator | } 2026-01-03 00:52:38.231091 | orchestrator | 2026-01-03 00:52:38.231096 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-03 00:52:38.231099 | orchestrator | Saturday 03 January 2026 00:51:50 +0000 (0:00:00.301) 0:02:25.586 ****** 2026-01-03 00:52:38.231109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 
'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.231114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.231118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.231122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.231126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': 
{'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.231130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.231137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.231141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.231152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:52:38.231156 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-1, testbed-node-0, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:38.231161 | orchestrator | 2026-01-03 00:52:38.231164 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-01-03 00:52:38.231168 | orchestrator | Saturday 03 January 2026 00:51:51 +0000 (0:00:01.619) 0:02:27.206 ****** 2026-01-03 00:52:38.231172 | orchestrator | changed: [testbed-node-0] => (item=[1]) 2026-01-03 00:52:38.231176 | orchestrator | changed: [testbed-node-1] => (item=[1]) 2026-01-03 00:52:38.231180 | orchestrator | changed: [testbed-node-2] => (item=[1]) 2026-01-03 00:52:38.231184 | orchestrator | 2026-01-03 00:52:38.231188 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-01-03 00:52:38.231191 | 
orchestrator | Saturday 03 January 2026 00:51:52 +0000 (0:00:01.155) 0:02:28.362 ****** 2026-01-03 00:52:38.231195 | orchestrator | changed: [testbed-node-0] => { 2026-01-03 00:52:38.231199 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:52:38.231203 | orchestrator | } 2026-01-03 00:52:38.231207 | orchestrator | changed: [testbed-node-1] => { 2026-01-03 00:52:38.231210 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:52:38.231214 | orchestrator | } 2026-01-03 00:52:38.231218 | orchestrator | changed: [testbed-node-2] => { 2026-01-03 00:52:38.231222 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:52:38.231226 | orchestrator | } 2026-01-03 00:52:38.231229 | orchestrator | 2026-01-03 00:52:38.231233 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-03 00:52:38.231240 | orchestrator | Saturday 03 January 2026 00:51:53 +0000 (0:00:00.471) 0:02:28.833 ****** 2026-01-03 00:52:38.231244 | orchestrator | 2026-01-03 00:52:38.231248 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-03 00:52:38.231251 | orchestrator | Saturday 03 January 2026 00:51:53 +0000 (0:00:00.057) 0:02:28.890 ****** 2026-01-03 00:52:38.231255 | orchestrator | 2026-01-03 00:52:38.231259 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-03 00:52:38.231263 | orchestrator | Saturday 03 January 2026 00:51:53 +0000 (0:00:00.056) 0:02:28.947 ****** 2026-01-03 00:52:38.231267 | orchestrator | 2026-01-03 00:52:38.231271 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-03 00:52:38.231274 | orchestrator | Saturday 03 January 2026 00:51:53 +0000 (0:00:00.060) 0:02:29.007 ****** 2026-01-03 00:52:38.231278 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:52:38.231282 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:52:38.231286 | orchestrator | 
2026-01-03 00:52:38.231290 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-01-03 00:52:38.231294 | orchestrator | Saturday 03 January 2026 00:52:04 +0000 (0:00:11.263) 0:02:40.271 ****** 2026-01-03 00:52:38.231297 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:52:38.231301 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:52:38.231305 | orchestrator | 2026-01-03 00:52:38.231309 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-01-03 00:52:38.231312 | orchestrator | Saturday 03 January 2026 00:52:16 +0000 (0:00:11.874) 0:02:52.145 ****** 2026-01-03 00:52:38.231316 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-01-03 00:52:38.231320 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-01-03 00:52:38.231324 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-01-03 00:52:38.231328 | orchestrator | 2026-01-03 00:52:38.231331 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-03 00:52:38.231335 | orchestrator | Saturday 03 January 2026 00:52:30 +0000 (0:00:13.308) 0:03:05.453 ****** 2026-01-03 00:52:38.231339 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:38.231343 | orchestrator | 2026-01-03 00:52:38.231346 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-01-03 00:52:38.231350 | orchestrator | Saturday 03 January 2026 00:52:30 +0000 (0:00:00.115) 0:03:05.569 ****** 2026-01-03 00:52:38.231354 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:38.231358 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:38.231362 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:38.231366 | orchestrator | 2026-01-03 00:52:38.231369 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-03 00:52:38.231373 | orchestrator | Saturday 03 January 2026 00:52:30 
+0000 (0:00:00.775) 0:03:06.345 ****** 2026-01-03 00:52:38.231377 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:38.231381 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:38.231385 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:52:38.231388 | orchestrator | 2026-01-03 00:52:38.231392 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-01-03 00:52:38.231396 | orchestrator | Saturday 03 January 2026 00:52:31 +0000 (0:00:00.735) 0:03:07.080 ****** 2026-01-03 00:52:38.231400 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:38.231404 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:38.231407 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:38.231411 | orchestrator | 2026-01-03 00:52:38.231417 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-01-03 00:52:38.231424 | orchestrator | Saturday 03 January 2026 00:52:32 +0000 (0:00:01.078) 0:03:08.159 ****** 2026-01-03 00:52:38.231428 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:38.231432 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:38.231436 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:52:38.231440 | orchestrator | 2026-01-03 00:52:38.231444 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-01-03 00:52:38.231452 | orchestrator | Saturday 03 January 2026 00:52:33 +0000 (0:00:00.659) 0:03:08.818 ****** 2026-01-03 00:52:38.231459 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:38.231466 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:38.231472 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:38.231480 | orchestrator | 2026-01-03 00:52:38.231486 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-03 00:52:38.231493 | orchestrator | Saturday 03 January 2026 00:52:34 +0000 (0:00:00.724) 0:03:09.542 ****** 
2026-01-03 00:52:38.231501 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:38.231509 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:38.231515 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:38.231522 | orchestrator | 2026-01-03 00:52:38.231526 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-01-03 00:52:38.231530 | orchestrator | Saturday 03 January 2026 00:52:35 +0000 (0:00:00.877) 0:03:10.420 ****** 2026-01-03 00:52:38.231534 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-01-03 00:52:38.231538 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-01-03 00:52:38.231542 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-01-03 00:52:38.231546 | orchestrator | 2026-01-03 00:52:38.231549 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:52:38.231554 | orchestrator | testbed-node-0 : ok=65  changed=29  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-01-03 00:52:38.231561 | orchestrator | testbed-node-1 : ok=63  changed=30  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0 2026-01-03 00:52:38.231567 | orchestrator | testbed-node-2 : ok=63  changed=30  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0 2026-01-03 00:52:38.231573 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:52:38.231580 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:52:38.231586 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:52:38.231592 | orchestrator | 2026-01-03 00:52:38.231599 | orchestrator | 2026-01-03 00:52:38.231606 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:52:38.231612 | orchestrator | Saturday 03 January 2026 00:52:36 +0000 (0:00:01.032) 
0:03:11.452 ****** 2026-01-03 00:52:38.231618 | orchestrator | =============================================================================== 2026-01-03 00:52:38.231622 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 25.65s 2026-01-03 00:52:38.231626 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 23.10s 2026-01-03 00:52:38.231630 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 21.54s 2026-01-03 00:52:38.231633 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.32s 2026-01-03 00:52:38.231637 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.61s 2026-01-03 00:52:38.231641 | orchestrator | ovn-controller : Restart ovn-controller container ----------------------- 8.33s 2026-01-03 00:52:38.231645 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.41s 2026-01-03 00:52:38.231648 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.21s 2026-01-03 00:52:38.231652 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 5.12s 2026-01-03 00:52:38.231656 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 3.62s 2026-01-03 00:52:38.231660 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 3.26s 2026-01-03 00:52:38.231663 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.11s 2026-01-03 00:52:38.231671 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 2.60s 2026-01-03 00:52:38.231675 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.34s 2026-01-03 00:52:38.231678 | orchestrator | service-check-containers : Include tasks -------------------------------- 
2.22s 2026-01-03 00:52:38.231682 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 1.82s 2026-01-03 00:52:38.231686 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 1.78s 2026-01-03 00:52:38.231689 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.75s 2026-01-03 00:52:38.231693 | orchestrator | service-check-containers : ovn_controller | Check containers ------------ 1.69s 2026-01-03 00:52:38.231697 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 1.65s 2026-01-03 00:52:38.231700 | orchestrator | 2026-01-03 00:52:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:52:41.268496 | orchestrator | 2026-01-03 00:52:41 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:52:41.269840 | orchestrator | 2026-01-03 00:52:41 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:52:41.269890 | orchestrator | 2026-01-03 00:52:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:52:44.308332 | orchestrator | 2026-01-03 00:52:44 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:52:44.309635 | orchestrator | 2026-01-03 00:52:44 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:52:44.309680 | orchestrator | 2026-01-03 00:52:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:52:47.349290 | orchestrator | 2026-01-03 00:52:47 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED 2026-01-03 00:52:47.349854 | orchestrator | 2026-01-03 00:52:47 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:52:47.349915 | orchestrator | 2026-01-03 00:52:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:52:50.390304 | orchestrator | 2026-01-03 00:52:50 | INFO  | Task 
db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED
2026-01-03 00:52:50.391317 | orchestrator | 2026-01-03 00:52:50 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED
2026-01-03 00:52:50.391463 | orchestrator | 2026-01-03 00:52:50 | INFO  | Wait 1 second(s) until the next check
[identical polling output repeated every ~3 s from 00:52:53 through 00:54:18, both tasks remaining in state STARTED]
2026-01-03 00:54:21.776337 | orchestrator | 2026-01-03 00:54:21 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state STARTED
2026-01-03 00:54:21.778008 | orchestrator | 2026-01-03 00:54:21 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED
2026-01-03 00:54:21.780066 | orchestrator | 2026-01-03 00:54:21 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:54:24.828161 | orchestrator | 2026-01-03 00:54:24 | INFO  | Task db4a7243-38d3-443b-9b54-4ab588763a1a is in state
SUCCESS
2026-01-03 00:54:24.828868 | orchestrator |
2026-01-03 00:54:24.828901 | orchestrator |
2026-01-03 00:54:24.828907 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-03 00:54:24.828915 | orchestrator |
2026-01-03 00:54:24.828922 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-03 00:54:24.828931 | orchestrator | Saturday 03 January 2026 00:48:16 +0000 (0:00:00.267) 0:00:00.267 ******
2026-01-03 00:54:24.828939 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:54:24.828950 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:54:24.828959 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:54:24.828966 | orchestrator |
2026-01-03 00:54:24.828972 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-03 00:54:24.828980 | orchestrator | Saturday 03 January 2026 00:48:16 +0000 (0:00:00.386) 0:00:00.653 ******
2026-01-03 00:54:24.829018 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-01-03 00:54:24.829026 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-01-03 00:54:24.829033 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-01-03 00:54:24.829040 | orchestrator |
2026-01-03 00:54:24.829062 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-01-03 00:54:24.829070 | orchestrator |
2026-01-03 00:54:24.829077 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-01-03 00:54:24.829084 | orchestrator | Saturday 03 January 2026 00:48:17 +0000 (0:00:00.514) 0:00:01.168 ******
2026-01-03 00:54:24.829091 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:54:24.829098 | orchestrator |
2026-01-03 00:54:24.829146 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-01-03 00:54:24.829153 | orchestrator | Saturday 03 January 2026 00:48:18 +0000 (0:00:00.702) 0:00:01.871 ******
2026-01-03 00:54:24.829160 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:54:24.829167 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:54:24.829171 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:54:24.829175 | orchestrator |
2026-01-03 00:54:24.829179 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-01-03 00:54:24.829183 | orchestrator | Saturday 03 January 2026 00:48:18 +0000 (0:00:00.689) 0:00:02.560 ******
2026-01-03 00:54:24.829188 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:54:24.829192 | orchestrator |
2026-01-03 00:54:24.829195 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-01-03 00:54:24.829199 | orchestrator | Saturday 03 January 2026 00:48:19 +0000 (0:00:00.666) 0:00:03.227 ******
2026-01-03 00:54:24.829203 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:54:24.829207 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:54:24.829210 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:54:24.829214 | orchestrator |
2026-01-03 00:54:24.829218 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-01-03 00:54:24.829221 | orchestrator | Saturday 03 January 2026 00:48:20 +0000 (0:00:00.754) 0:00:03.981 ******
2026-01-03 00:54:24.829226 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-03 00:54:24.829230 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-03 00:54:24.829234 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-03 00:54:24.829238 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-03 00:54:24.829244 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-03 00:54:24.829247 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-03 00:54:24.829251 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-03 00:54:24.829255 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-03 00:54:24.829258 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-03 00:54:24.829262 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-03 00:54:24.829266 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-03 00:54:24.829269 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-03 00:54:24.829273 | orchestrator |
2026-01-03 00:54:24.829277 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-01-03 00:54:24.829280 | orchestrator | Saturday 03 January 2026 00:48:23 +0000 (0:00:03.672) 0:00:07.653 ******
2026-01-03 00:54:24.829293 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-01-03 00:54:24.829299 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-01-03 00:54:24.829303 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-01-03 00:54:24.829306 | orchestrator |
2026-01-03 00:54:24.829310 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-01-03 00:54:24.829314 | orchestrator | Saturday 03 January 2026 00:48:24 +0000 (0:00:00.921) 0:00:08.575 ******
2026-01-03 00:54:24.829318 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-01-03 00:54:24.829322 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-01-03 00:54:24.829325 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-01-03 00:54:24.829329 | orchestrator |
2026-01-03 00:54:24.829333 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-01-03 00:54:24.829337 | orchestrator | Saturday 03 January 2026 00:48:26 +0000 (0:00:01.635) 0:00:10.210 ******
2026-01-03 00:54:24.829340 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-01-03 00:54:24.829344 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:54:24.829360 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-01-03 00:54:24.829364 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:54:24.829368 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-01-03 00:54:24.829371 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:54:24.829375 | orchestrator |
2026-01-03 00:54:24.829379 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-01-03 00:54:24.829383 | orchestrator | Saturday 03 January 2026 00:48:27 +0000 (0:00:00.559) 0:00:10.770 ******
2026-01-03 00:54:24.829390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-03 00:54:24.829528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-03 00:54:24.829536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-03 00:54:24.829542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-03 00:54:24.829558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-03 00:54:24.829574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-03 00:54:24.829582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-03 00:54:24.829596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-03 00:54:24.829603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-03 00:54:24.829608 | orchestrator |
2026-01-03 00:54:24.829614 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-01-03 00:54:24.829632 | orchestrator | Saturday 03 January 2026 00:48:29 +0000 (0:00:02.033) 0:00:12.804 ******
2026-01-03 00:54:24.829646 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:54:24.829653 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:54:24.829661 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:54:24.829668 | orchestrator |
2026-01-03 00:54:24.829674 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-01-03 00:54:24.829680 | orchestrator | Saturday 03 January 2026 00:48:30 +0000 (0:00:01.442) 0:00:14.247 ******
2026-01-03 00:54:24.829687 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-01-03 00:54:24.829694 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-01-03 00:54:24.829706 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-01-03 00:54:24.829713 | orchestrator | changed: [testbed-node-0] =>
(item=rules)
2026-01-03 00:54:24.829720 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-01-03 00:54:24.829727 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-01-03 00:54:24.829733 | orchestrator |
2026-01-03 00:54:24.829740 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-01-03 00:54:24.829746 | orchestrator | Saturday 03 January 2026 00:48:32 +0000 (0:00:02.131) 0:00:16.378 ******
2026-01-03 00:54:24.829753 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:54:24.829759 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:54:24.829766 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:54:24.829772 | orchestrator |
2026-01-03 00:54:24.829816 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-01-03 00:54:24.829823 | orchestrator | Saturday 03 January 2026 00:48:34 +0000 (0:00:01.310) 0:00:17.688 ******
2026-01-03 00:54:24.829828 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:54:24.829835 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:54:24.829840 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:54:24.829846 | orchestrator |
2026-01-03 00:54:24.829852 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-01-03 00:54:24.829858 | orchestrator | Saturday 03 January 2026 00:48:36 +0000 (0:00:02.174) 0:00:19.862 ******
2026-01-03 00:54:24.829865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-03 00:54:24.829880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-03 00:54:24.829891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-03 00:54:24.829897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-03 00:54:24.829904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__86aff0f421f0e2c84a7e9d4eafe7a6454c387fc6', '__omit_place_holder__86aff0f421f0e2c84a7e9d4eafe7a6454c387fc6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-01-03 00:54:24.829916 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:54:24.829923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-03 00:54:24.829929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-03 00:54:24.829936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-03 00:54:24.829949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__86aff0f421f0e2c84a7e9d4eafe7a6454c387fc6', '__omit_place_holder__86aff0f421f0e2c84a7e9d4eafe7a6454c387fc6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-01-03 00:54:24.829959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-03 00:54:24.829966 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:54:24.829987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-03 00:54:24.830000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__86aff0f421f0e2c84a7e9d4eafe7a6454c387fc6', '__omit_place_holder__86aff0f421f0e2c84a7e9d4eafe7a6454c387fc6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-01-03 00:54:24.830006 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:54:24.830339 | orchestrator |
2026-01-03 00:54:24.830352 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2026-01-03 00:54:24.830359 | orchestrator | Saturday 03 January 2026 00:48:36 +0000 (0:00:00.789) 0:00:20.652 ******
2026-01-03 00:54:24.830366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1',
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-03 00:54:24.830373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-03 00:54:24.830387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-03 00:54:24.830400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-03 00:54:24.830413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:54:24.830420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__86aff0f421f0e2c84a7e9d4eafe7a6454c387fc6', '__omit_place_holder__86aff0f421f0e2c84a7e9d4eafe7a6454c387fc6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-03 00:54:24.830426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-03 00:54:24.830434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:54:24.830440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__86aff0f421f0e2c84a7e9d4eafe7a6454c387fc6', '__omit_place_holder__86aff0f421f0e2c84a7e9d4eafe7a6454c387fc6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-03 00:54:24.830452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-03 00:54:24.830462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:54:24.830479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__86aff0f421f0e2c84a7e9d4eafe7a6454c387fc6', '__omit_place_holder__86aff0f421f0e2c84a7e9d4eafe7a6454c387fc6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-03 00:54:24.830485 | orchestrator | 2026-01-03 00:54:24.830492 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-01-03 00:54:24.830498 | orchestrator | Saturday 03 January 2026 00:48:40 +0000 (0:00:03.581) 0:00:24.234 ****** 2026-01-03 00:54:24.830505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 
'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-03 00:54:24.830511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-03 00:54:24.830518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-03 00:54:24.830531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-03 00:54:24.830541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-03 00:54:24.830554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-03 00:54:24.830560 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-03 00:54:24.830567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-03 00:54:24.830573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-03 00:54:24.830579 | orchestrator | 2026-01-03 00:54:24.830585 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-01-03 00:54:24.830591 | orchestrator | Saturday 03 January 2026 00:48:43 +0000 (0:00:02.863) 0:00:27.097 ****** 2026-01-03 00:54:24.830598 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-03 00:54:24.830604 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-03 00:54:24.830610 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-03 00:54:24.830616 | orchestrator | 2026-01-03 00:54:24.830623 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-01-03 00:54:24.830629 | orchestrator | Saturday 03 January 2026 00:48:45 +0000 (0:00:01.957) 0:00:29.054 ****** 2026-01-03 00:54:24.830635 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-03 00:54:24.830642 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-03 00:54:24.830648 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-03 00:54:24.830660 | orchestrator | 2026-01-03 00:54:24.830670 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-01-03 00:54:24.830677 | orchestrator | Saturday 03 January 2026 00:48:48 +0000 (0:00:03.345) 0:00:32.400 ****** 2026-01-03 00:54:24.830683 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.830690 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.830696 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.830703 | orchestrator | 2026-01-03 00:54:24.830709 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-01-03 00:54:24.830715 | orchestrator | Saturday 03 January 2026 00:48:49 +0000 (0:00:00.756) 0:00:33.157 ****** 2026-01-03 00:54:24.830722 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-03 00:54:24.830731 | orchestrator | changed: [testbed-node-0] 
=> (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-03 00:54:24.830741 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-03 00:54:24.830749 | orchestrator | 2026-01-03 00:54:24.830753 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-01-03 00:54:24.830756 | orchestrator | Saturday 03 January 2026 00:48:52 +0000 (0:00:03.331) 0:00:36.488 ****** 2026-01-03 00:54:24.830760 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-03 00:54:24.830764 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-03 00:54:24.830768 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-03 00:54:24.830772 | orchestrator | 2026-01-03 00:54:24.830775 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-03 00:54:24.830853 | orchestrator | Saturday 03 January 2026 00:48:54 +0000 (0:00:02.057) 0:00:38.546 ****** 2026-01-03 00:54:24.830857 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:54:24.830861 | orchestrator | 2026-01-03 00:54:24.830865 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-01-03 00:54:24.830868 | orchestrator | Saturday 03 January 2026 00:48:55 +0000 (0:00:00.689) 0:00:39.236 ****** 2026-01-03 00:54:24.830873 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-01-03 00:54:24.830877 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-01-03 00:54:24.830881 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 
2026-01-03 00:54:24.830885 | orchestrator | 2026-01-03 00:54:24.830888 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-01-03 00:54:24.830892 | orchestrator | Saturday 03 January 2026 00:48:57 +0000 (0:00:01.502) 0:00:40.738 ****** 2026-01-03 00:54:24.830896 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-01-03 00:54:24.830900 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-01-03 00:54:24.830904 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-01-03 00:54:24.830907 | orchestrator | 2026-01-03 00:54:24.830911 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-01-03 00:54:24.830915 | orchestrator | Saturday 03 January 2026 00:48:58 +0000 (0:00:01.803) 0:00:42.542 ****** 2026-01-03 00:54:24.830919 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.830922 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.830926 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.830930 | orchestrator | 2026-01-03 00:54:24.830933 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-01-03 00:54:24.830937 | orchestrator | Saturday 03 January 2026 00:48:59 +0000 (0:00:00.366) 0:00:42.909 ****** 2026-01-03 00:54:24.830941 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.830950 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.830954 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.830957 | orchestrator | 2026-01-03 00:54:24.830961 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-03 00:54:24.830966 | orchestrator | Saturday 03 January 2026 00:48:59 +0000 (0:00:00.771) 0:00:43.680 ****** 2026-01-03 00:54:24.830971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 
'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-03 00:54:24.830982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-03 00:54:24.830991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-03 00:54:24.830995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 
'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-03 00:54:24.831000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-03 00:54:24.831004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-03 00:54:24.831014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-03 00:54:24.831019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-03 00:54:24.831027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-03 00:54:24.831032 | orchestrator | 2026-01-03 00:54:24.831036 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-03 00:54:24.831041 | orchestrator | Saturday 03 January 2026 00:49:03 +0000 (0:00:03.848) 0:00:47.529 ****** 2026-01-03 00:54:24.831049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-03 00:54:24.831054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-03 00:54:24.831058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:54:24.831063 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.831067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-03 00:54:24.831075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-03 00:54:24.831080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:54:24.831084 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.831091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-03 00:54:24.831098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-03 00:54:24.831103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:54:24.831108 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.831113 | orchestrator | 2026-01-03 00:54:24.831117 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-03 00:54:24.831122 | orchestrator | Saturday 03 January 2026 00:49:04 +0000 (0:00:01.048) 0:00:48.577 ****** 2026-01-03 00:54:24.831129 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-03 00:54:24.831134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-03 00:54:24.831138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:54:24.831142 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.831149 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-03 00:54:24.831157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-03 00:54:24.831164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:54:24.831170 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.831176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-03 00:54:24.831188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-03 00:54:24.831194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:54:24.831201 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.831208 | orchestrator | 2026-01-03 00:54:24.831214 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] 
************************ 2026-01-03 00:54:24.831220 | orchestrator | Saturday 03 January 2026 00:49:06 +0000 (0:00:01.328) 0:00:49.906 ****** 2026-01-03 00:54:24.831227 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-03 00:54:24.831234 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-03 00:54:24.831242 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-03 00:54:24.831248 | orchestrator | 2026-01-03 00:54:24.831255 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-01-03 00:54:24.831262 | orchestrator | Saturday 03 January 2026 00:49:07 +0000 (0:00:01.389) 0:00:51.296 ****** 2026-01-03 00:54:24.831268 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-03 00:54:24.831278 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-03 00:54:24.831284 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-03 00:54:24.831290 | orchestrator | 2026-01-03 00:54:24.831296 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-01-03 00:54:24.831302 | orchestrator | Saturday 03 January 2026 00:49:09 +0000 (0:00:01.533) 0:00:52.829 ****** 2026-01-03 00:54:24.831308 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-03 00:54:24.831313 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-03 00:54:24.831319 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 
'sshd_config'})  2026-01-03 00:54:24.831325 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-03 00:54:24.831332 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.831343 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-03 00:54:24.831349 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.831364 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-03 00:54:24.831370 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.831376 | orchestrator | 2026-01-03 00:54:24.831382 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-01-03 00:54:24.831389 | orchestrator | Saturday 03 January 2026 00:49:10 +0000 (0:00:01.288) 0:00:54.117 ****** 2026-01-03 00:54:24.831395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-03 00:54:24.831402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-03 00:54:24.831408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-03 00:54:24.831415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-03 00:54:24.831427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-03 00:54:24.831438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-03 00:54:24.831537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-03 00:54:24.831545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-03 00:54:24.831551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-03 00:54:24.831558 | orchestrator | 2026-01-03 00:54:24.831564 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-01-03 00:54:24.831570 | orchestrator | Saturday 03 January 2026 00:49:12 +0000 (0:00:02.418) 0:00:56.535 ****** 2026-01-03 00:54:24.831577 | orchestrator | changed: [testbed-node-0] => { 2026-01-03 00:54:24.831583 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:54:24.831589 | orchestrator | } 2026-01-03 00:54:24.831595 | orchestrator | changed: [testbed-node-1] => { 2026-01-03 00:54:24.831601 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:54:24.831607 | orchestrator | } 2026-01-03 00:54:24.831614 | orchestrator | changed: [testbed-node-2] => { 2026-01-03 00:54:24.831619 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:54:24.831626 | orchestrator | } 2026-01-03 00:54:24.831632 | orchestrator | 2026-01-03 00:54:24.831638 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-03 00:54:24.831644 | orchestrator | Saturday 03 January 2026 00:49:13 +0000 (0:00:00.691) 0:00:57.227 ****** 2026-01-03 00:54:24.831651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-03 00:54:24.831663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-03 00:54:24.831674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:54:24.831684 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.832490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-03 00:54:24.832521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-03 00:54:24.832530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:54:24.832538 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.832545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-03 00:54:24.832552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-03 00:54:24.832569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:54:24.832576 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.832582 | orchestrator | 2026-01-03 00:54:24.832588 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-01-03 00:54:24.832595 | orchestrator | Saturday 03 January 2026 00:49:14 +0000 (0:00:01.207) 0:00:58.435 ****** 2026-01-03 00:54:24.832601 
| orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:54:24.832608 | orchestrator | 2026-01-03 00:54:24.832614 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-01-03 00:54:24.832620 | orchestrator | Saturday 03 January 2026 00:49:15 +0000 (0:00:00.531) 0:00:58.966 ****** 2026-01-03 00:54:24.832640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.832649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-03 
00:54:24.832656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.832663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.832674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': 
['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.832683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-03 00:54:24.832693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.832700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 
'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.832706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-03 00:54:24.832713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.832723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.832729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.832736 | orchestrator | 2026-01-03 00:54:24.832744 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-01-03 00:54:24.832750 | orchestrator | Saturday 03 January 2026 00:49:18 +0000 (0:00:03.080) 0:01:02.047 ****** 2026-01-03 00:54:24.832759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 
'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:54:24.832766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-03 00:54:24.832772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.832796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.832807 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.832814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:54:24.832822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-03 00:54:24.832832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 
'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.832838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.832844 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.832850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:54:24.832861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-03 00:54:24.832867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.832877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 
'timeout': '30'}}})  2026-01-03 00:54:24.832883 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.832889 | orchestrator | 2026-01-03 00:54:24.832937 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-01-03 00:54:24.832943 | orchestrator | Saturday 03 January 2026 00:49:18 +0000 (0:00:00.566) 0:01:02.614 ****** 2026-01-03 00:54:24.832954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.832964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.832972 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.832979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.832983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.832987 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.832991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.832999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.833003 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.833006 | orchestrator | 2026-01-03 00:54:24.833010 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-01-03 00:54:24.833014 | orchestrator | Saturday 03 January 2026 00:49:19 +0000 (0:00:00.837) 0:01:03.452 ****** 2026-01-03 00:54:24.833018 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:54:24.833022 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:54:24.833025 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:54:24.833029 | orchestrator | 2026-01-03 00:54:24.833033 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-01-03 00:54:24.833037 | orchestrator | Saturday 03 January 2026 00:49:21 +0000 (0:00:01.389) 0:01:04.842 ****** 2026-01-03 00:54:24.833040 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:54:24.833044 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:54:24.833050 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:54:24.833056 | orchestrator | 2026-01-03 00:54:24.833062 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-01-03 00:54:24.833068 | orchestrator | Saturday 03 January 2026 00:49:23 +0000 (0:00:01.913) 0:01:06.755 ****** 2026-01-03 00:54:24.833074 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:54:24.833080 | orchestrator | 2026-01-03 00:54:24.833086 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-01-03 00:54:24.833093 | orchestrator | Saturday 03 January 2026 00:49:23 +0000 (0:00:00.829) 0:01:07.585 ****** 2026-01-03 00:54:24.833100 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.833113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.833120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.833158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.833166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.833173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.833181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.833193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.833208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.833214 | orchestrator | 2026-01-03 00:54:24.833220 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-01-03 00:54:24.833227 | orchestrator | Saturday 03 January 2026 00:49:28 +0000 (0:00:04.227) 0:01:11.813 ****** 2026-01-03 00:54:24.833233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:54:24.833240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:54:24.833251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.833656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.833677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.833683 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.833690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.833696 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.833703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:54:24.833710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.833724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.833735 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.833742 | orchestrator | 2026-01-03 00:54:24.833748 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-01-03 00:54:24.833754 | orchestrator | Saturday 03 January 2026 00:49:29 +0000 (0:00:00.875) 0:01:12.689 ****** 2026-01-03 00:54:24.833762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.833768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.833775 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.833957 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.833966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.833973 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.834191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.834202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.834208 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.834215 | orchestrator | 2026-01-03 00:54:24.834221 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-01-03 00:54:24.834228 | orchestrator | Saturday 03 January 2026 00:49:29 +0000 (0:00:00.996) 0:01:13.685 ****** 2026-01-03 00:54:24.834234 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:54:24.834240 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:54:24.834246 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:54:24.834252 | orchestrator | 2026-01-03 00:54:24.834258 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-01-03 00:54:24.834265 | orchestrator | Saturday 03 
January 2026 00:49:31 +0000 (0:00:01.254) 0:01:14.940 ****** 2026-01-03 00:54:24.834271 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:54:24.834277 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:54:24.834283 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:54:24.834289 | orchestrator | 2026-01-03 00:54:24.834295 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-01-03 00:54:24.834302 | orchestrator | Saturday 03 January 2026 00:49:33 +0000 (0:00:02.290) 0:01:17.230 ****** 2026-01-03 00:54:24.834308 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.834314 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.834319 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.834326 | orchestrator | 2026-01-03 00:54:24.834332 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-01-03 00:54:24.834338 | orchestrator | Saturday 03 January 2026 00:49:33 +0000 (0:00:00.324) 0:01:17.554 ****** 2026-01-03 00:54:24.834344 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:54:24.834357 | orchestrator | 2026-01-03 00:54:24.834363 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-01-03 00:54:24.834369 | orchestrator | Saturday 03 January 2026 00:49:34 +0000 (0:00:01.122) 0:01:18.676 ****** 2026-01-03 00:54:24.834381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-03 00:54:24.834459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-03 00:54:24.834468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-03 00:54:24.834474 | orchestrator | 2026-01-03 00:54:24.834480 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-01-03 00:54:24.834487 | orchestrator | Saturday 03 January 2026 00:49:37 +0000 (0:00:02.993) 0:01:21.669 ****** 2026-01-03 00:54:24.834493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-03 00:54:24.834500 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.834511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check 
inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-03 00:54:24.834517 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.834541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-03 00:54:24.834548 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.834553 | orchestrator | 2026-01-03 00:54:24.834559 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-01-03 00:54:24.834565 | orchestrator | Saturday 03 January 2026 00:49:39 +0000 (0:00:01.697) 0:01:23.367 ****** 2026-01-03 00:54:24.834573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-03 00:54:24.834580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-03 00:54:24.834587 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.834593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-03 00:54:24.834600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-03 00:54:24.834606 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.834611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-03 00:54:24.834622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-03 00:54:24.834628 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.834634 | orchestrator | 2026-01-03 00:54:24.834641 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-01-03 00:54:24.834647 | orchestrator | Saturday 03 January 2026 00:49:41 +0000 (0:00:01.728) 0:01:25.095 ****** 2026-01-03 00:54:24.834653 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.834658 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.834929 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.834942 | orchestrator | 2026-01-03 00:54:24.834946 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-01-03 00:54:24.834950 | orchestrator | Saturday 03 January 2026 00:49:42 +0000 (0:00:00.747) 0:01:25.843 ****** 2026-01-03 00:54:24.834954 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.834963 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.835000 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.835006 | orchestrator | 2026-01-03 00:54:24.835010 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-01-03 00:54:24.835014 | orchestrator | Saturday 03 January 2026 00:49:44 +0000 (0:00:02.671) 0:01:28.514 ****** 2026-01-03 00:54:24.835018 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:54:24.835022 | orchestrator | 2026-01-03 00:54:24.835026 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-01-03 
00:54:24.835043 | orchestrator | Saturday 03 January 2026 00:49:45 +0000 (0:00:00.823) 0:01:29.338 ****** 2026-01-03 00:54:24.835049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.835056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.835095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 
'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.835148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835171 | orchestrator | 2026-01-03 00:54:24.835175 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-01-03 00:54:24.835179 | orchestrator | Saturday 03 January 2026 00:49:48 +0000 (0:00:03.293) 0:01:32.631 ****** 2026-01-03 00:54:24.835183 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:54:24.835191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:54:24.835201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835231 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.835250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835258 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.835276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:54:24.835282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835298 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.835302 | orchestrator | 2026-01-03 00:54:24.835306 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-01-03 00:54:24.835310 | orchestrator | 
Saturday 03 January 2026 00:49:49 +0000 (0:00:00.803) 0:01:33.434 ****** 2026-01-03 00:54:24.835314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.835319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.835323 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.835356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.835361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.835365 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.835372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.835376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-03 
00:54:24.835380 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.835384 | orchestrator | 2026-01-03 00:54:24.835398 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-01-03 00:54:24.835402 | orchestrator | Saturday 03 January 2026 00:49:50 +0000 (0:00:00.912) 0:01:34.347 ****** 2026-01-03 00:54:24.835406 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:54:24.835410 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:54:24.835413 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:54:24.835417 | orchestrator | 2026-01-03 00:54:24.835421 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-01-03 00:54:24.835429 | orchestrator | Saturday 03 January 2026 00:49:51 +0000 (0:00:01.216) 0:01:35.563 ****** 2026-01-03 00:54:24.835432 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:54:24.835436 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:54:24.835440 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:54:24.835444 | orchestrator | 2026-01-03 00:54:24.835447 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-01-03 00:54:24.835451 | orchestrator | Saturday 03 January 2026 00:49:53 +0000 (0:00:01.646) 0:01:37.210 ****** 2026-01-03 00:54:24.835455 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.835459 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.835462 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.835466 | orchestrator | 2026-01-03 00:54:24.835470 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-01-03 00:54:24.835474 | orchestrator | Saturday 03 January 2026 00:49:53 +0000 (0:00:00.274) 0:01:37.484 ****** 2026-01-03 00:54:24.835477 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.835481 | orchestrator | skipping: [testbed-node-1] 2026-01-03 
00:54:24.835485 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.835489 | orchestrator | 2026-01-03 00:54:24.835492 | orchestrator | TASK [include_role : designate] ************************************************ 2026-01-03 00:54:24.835496 | orchestrator | Saturday 03 January 2026 00:49:54 +0000 (0:00:00.242) 0:01:37.727 ****** 2026-01-03 00:54:24.835500 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:54:24.835504 | orchestrator | 2026-01-03 00:54:24.835507 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-01-03 00:54:24.835511 | orchestrator | Saturday 03 January 2026 00:49:54 +0000 (0:00:00.957) 0:01:38.685 ****** 2026-01-03 00:54:24.835516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.835520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-03 00:54:24.835528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.835575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.835849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-03 00:54:24.835866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-03 00:54:24.835887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.835963 | orchestrator | 2026-01-03 00:54:24.835967 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-01-03 00:54:24.835971 | orchestrator | Saturday 03 January 2026 00:50:00 +0000 (0:00:05.084) 0:01:43.769 ****** 2026-01-03 00:54:24.835989 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:54:24.835994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-03 00:54:24.835998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.836002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.836006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.836010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.836023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.836027 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.836042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:54:24.836046 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-03 00:54:24.836050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.836054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.836061 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.836068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.836082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.836087 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.836091 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:54:24.836095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-03 00:54:24.836099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.836106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.836113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.836147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.836152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.836156 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.836160 | orchestrator | 2026-01-03 00:54:24.836163 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-01-03 00:54:24.836167 | orchestrator | Saturday 03 January 2026 00:50:01 +0000 (0:00:01.041) 0:01:44.810 ****** 2026-01-03 00:54:24.836172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.836177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.836182 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.836185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.836189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.836197 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.836201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.836205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.836208 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.836246 | orchestrator | 2026-01-03 00:54:24.836250 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-01-03 00:54:24.836254 | orchestrator | Saturday 03 January 2026 00:50:02 +0000 (0:00:01.409) 0:01:46.220 ****** 2026-01-03 00:54:24.836258 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:54:24.836262 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:54:24.836266 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:54:24.836518 | orchestrator | 2026-01-03 00:54:24.836529 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-01-03 00:54:24.836533 | orchestrator | Saturday 03 January 2026 00:50:03 +0000 (0:00:01.135) 0:01:47.355 ****** 2026-01-03 00:54:24.836537 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:54:24.836540 | orchestrator | 
changed: [testbed-node-1] 2026-01-03 00:54:24.836544 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:54:24.836548 | orchestrator | 2026-01-03 00:54:24.836552 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-01-03 00:54:24.836556 | orchestrator | Saturday 03 January 2026 00:50:05 +0000 (0:00:01.766) 0:01:49.122 ****** 2026-01-03 00:54:24.836560 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.836563 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.836570 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.836574 | orchestrator | 2026-01-03 00:54:24.836578 | orchestrator | TASK [include_role : glance] *************************************************** 2026-01-03 00:54:24.836582 | orchestrator | Saturday 03 January 2026 00:50:05 +0000 (0:00:00.277) 0:01:49.399 ****** 2026-01-03 00:54:24.836585 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:54:24.836589 | orchestrator | 2026-01-03 00:54:24.836593 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-01-03 00:54:24.836597 | orchestrator | Saturday 03 January 2026 00:50:06 +0000 (0:00:00.882) 0:01:50.283 ****** 2026-01-03 00:54:24.836615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-03 00:54:24.836630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 
check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-03 00:54:24.836646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-03 00:54:24.836661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-03 00:54:24.836678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-03 00:54:24.836683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-03 00:54:24.836692 | orchestrator | 2026-01-03 00:54:24.836696 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-01-03 00:54:24.836700 | orchestrator | Saturday 03 January 2026 00:50:10 +0000 (0:00:03.726) 0:01:54.009 ****** 2026-01-03 00:54:24.836717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-03 00:54:24.836722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-03 00:54:24.836731 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.836745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-03 00:54:24.836771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option 
httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-03 00:54:24.837026 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.837041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 
'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-03 00:54:24.837060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-03 00:54:24.837070 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.837074 | orchestrator | 2026-01-03 00:54:24.837078 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-01-03 00:54:24.837082 | orchestrator | Saturday 03 January 2026 00:50:13 +0000 (0:00:03.001) 0:01:57.010 ****** 2026-01-03 00:54:24.837086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-03 00:54:24.837091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-03 00:54:24.837096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-03 00:54:24.837100 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.837117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-03 00:54:24.837122 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.837126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-03 00:54:24.837134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-03 00:54:24.837138 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.837142 | orchestrator | 2026-01-03 00:54:24.837146 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-01-03 00:54:24.837149 | orchestrator | Saturday 03 January 2026 00:50:16 +0000 (0:00:02.953) 0:01:59.964 ****** 2026-01-03 00:54:24.837153 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:54:24.837157 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:54:24.837161 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:54:24.837164 | orchestrator | 2026-01-03 00:54:24.837168 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-01-03 00:54:24.837172 | orchestrator | Saturday 03 January 2026 00:50:17 +0000 (0:00:01.088) 0:02:01.053 ****** 2026-01-03 00:54:24.837176 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:54:24.837179 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:54:24.837183 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:54:24.837187 | orchestrator | 2026-01-03 00:54:24.837191 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-01-03 00:54:24.837194 | orchestrator | Saturday 03 January 2026 00:50:19 +0000 (0:00:01.864) 0:02:02.917 ****** 2026-01-03 00:54:24.837198 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.837202 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.837206 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.837209 | orchestrator | 2026-01-03 00:54:24.837213 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-01-03 00:54:24.837217 | orchestrator | Saturday 03 January 2026 00:50:19 +0000 
(0:00:00.280) 0:02:03.198 ****** 2026-01-03 00:54:24.837221 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:54:24.837225 | orchestrator | 2026-01-03 00:54:24.837228 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-01-03 00:54:24.837232 | orchestrator | Saturday 03 January 2026 00:50:20 +0000 (0:00:00.766) 0:02:03.964 ****** 2026-01-03 00:54:24.837237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.837244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option 
httpchk']}}}}) 2026-01-03 00:54:24.837261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.837265 | orchestrator | 2026-01-03 00:54:24.837269 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-01-03 00:54:24.837273 | orchestrator | Saturday 03 January 2026 00:50:23 +0000 (0:00:03.213) 0:02:07.177 ****** 2026-01-03 00:54:24.837277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:54:24.837281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:54:24.837285 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.837289 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.837293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:54:24.837297 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.837300 | orchestrator | 2026-01-03 00:54:24.837304 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-01-03 00:54:24.837312 | orchestrator | Saturday 03 January 2026 00:50:23 +0000 (0:00:00.463) 0:02:07.642 ****** 2026-01-03 00:54:24.837317 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.837324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.837328 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.837342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.837346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.837350 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.837354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.837358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.837362 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.837366 | orchestrator | 2026-01-03 00:54:24.837370 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users 
config] ************ 2026-01-03 00:54:24.837373 | orchestrator | Saturday 03 January 2026 00:50:24 +0000 (0:00:00.543) 0:02:08.185 ****** 2026-01-03 00:54:24.837377 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:54:24.837381 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:54:24.837385 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:54:24.837388 | orchestrator | 2026-01-03 00:54:24.837392 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-01-03 00:54:24.837430 | orchestrator | Saturday 03 January 2026 00:50:25 +0000 (0:00:01.489) 0:02:09.674 ****** 2026-01-03 00:54:24.837434 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:54:24.837438 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:54:24.837442 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:54:24.837446 | orchestrator | 2026-01-03 00:54:24.837450 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-01-03 00:54:24.837454 | orchestrator | Saturday 03 January 2026 00:50:28 +0000 (0:00:02.051) 0:02:11.726 ****** 2026-01-03 00:54:24.837457 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.837461 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.837465 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.837469 | orchestrator | 2026-01-03 00:54:24.837473 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-01-03 00:54:24.837477 | orchestrator | Saturday 03 January 2026 00:50:28 +0000 (0:00:00.310) 0:02:12.036 ****** 2026-01-03 00:54:24.837480 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:54:24.837484 | orchestrator | 2026-01-03 00:54:24.837488 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-01-03 00:54:24.837687 | orchestrator | Saturday 03 January 2026 00:50:29 +0000 
(0:00:00.882) 0:02:12.918 ****** 2026-01-03 00:54:24.837713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-03 00:54:24.837724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-03 00:54:24.837748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-03 00:54:24.837753 | orchestrator | 2026-01-03 00:54:24.837757 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-01-03 00:54:24.837760 | orchestrator | Saturday 03 January 2026 00:50:32 +0000 (0:00:03.500) 0:02:16.419 ****** 2026-01-03 00:54:24.837765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-03 00:54:24.837773 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.837818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 
'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-03 00:54:24.837823 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.837830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 
'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-03 
00:54:24.837841 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.837847 | orchestrator | 2026-01-03 00:54:24.837857 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-01-03 00:54:24.837866 | orchestrator | Saturday 03 January 2026 00:50:33 +0000 (0:00:00.661) 0:02:17.080 ****** 2026-01-03 00:54:24.837878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-01-03 00:54:24.837902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-03 00:54:24.837910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-01-03 00:54:24.837918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-03 00:54:24.837925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 
'with_frontend': False, 'custom_member_list': []}})  2026-01-03 00:54:24.837932 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.837938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-01-03 00:54:24.837944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-03 00:54:24.837957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-01-03 00:54:24.837964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-03 00:54:24.837970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-03 00:54:24.837977 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.837983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 
'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-01-03 00:54:24.837989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-03 00:54:24.837996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-01-03 00:54:24.838006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-03 00:54:24.838465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-03 00:54:24.838486 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.838490 | orchestrator | 2026-01-03 00:54:24.838495 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-01-03 00:54:24.838499 | orchestrator | Saturday 03 January 2026 00:50:34 +0000 (0:00:01.003) 0:02:18.083 ****** 2026-01-03 00:54:24.838504 | 
orchestrator | changed: [testbed-node-0] 2026-01-03 00:54:24.838508 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:54:24.838512 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:54:24.838516 | orchestrator | 2026-01-03 00:54:24.838520 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-01-03 00:54:24.838525 | orchestrator | Saturday 03 January 2026 00:50:35 +0000 (0:00:01.328) 0:02:19.412 ****** 2026-01-03 00:54:24.838529 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:54:24.838533 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:54:24.838537 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:54:24.838541 | orchestrator | 2026-01-03 00:54:24.838545 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-01-03 00:54:24.838549 | orchestrator | Saturday 03 January 2026 00:50:37 +0000 (0:00:01.990) 0:02:21.403 ****** 2026-01-03 00:54:24.838552 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.838556 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.838569 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.838573 | orchestrator | 2026-01-03 00:54:24.838577 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-01-03 00:54:24.838581 | orchestrator | Saturday 03 January 2026 00:50:38 +0000 (0:00:00.344) 0:02:21.747 ****** 2026-01-03 00:54:24.838585 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.838589 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.838592 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.838596 | orchestrator | 2026-01-03 00:54:24.838600 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-01-03 00:54:24.838604 | orchestrator | Saturday 03 January 2026 00:50:38 +0000 (0:00:00.287) 0:02:22.035 ****** 2026-01-03 00:54:24.838608 | 
orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:54:24.838612 | orchestrator | 2026-01-03 00:54:24.838616 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-01-03 00:54:24.838620 | orchestrator | Saturday 03 January 2026 00:50:39 +0000 (0:00:01.163) 0:02:23.199 ****** 2026-01-03 00:54:24.838624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-03 00:54:24.838631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:54:24.838640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-03 00:54:24.839093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-03 00:54:24.839119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:54:24.839124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-03 00:54:24.839129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-03 00:54:24.839138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:54:24.839379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-03 00:54:24.839412 | orchestrator | 2026-01-03 00:54:24.839418 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-01-03 00:54:24.839425 | orchestrator | Saturday 03 January 2026 00:50:43 +0000 (0:00:03.830) 0:02:27.029 ****** 2026-01-03 00:54:24.839431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 
'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-03 00:54:24.839437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:54:24.839444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-03 00:54:24.839450 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.839463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-03 00:54:24.839527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:54:24.839543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-03 00:54:24.839550 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.839556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-03 00:54:24.839562 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:54:24.839568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-03 00:54:24.839574 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.839580 | orchestrator | 2026-01-03 00:54:24.839699 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-01-03 00:54:24.839705 | orchestrator | Saturday 03 January 2026 00:50:44 +0000 (0:00:00.883) 0:02:27.913 ****** 2026-01-03 00:54:24.839715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-01-03 00:54:24.839726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 
'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-01-03 00:54:24.839770 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.839823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-01-03 00:54:24.839833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-01-03 00:54:24.839839 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.839845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-01-03 00:54:24.839851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-01-03 00:54:24.839858 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.839865 | orchestrator | 2026-01-03 00:54:24.840099 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-01-03 00:54:24.840108 | orchestrator | Saturday 03 January 2026 00:50:45 +0000 (0:00:01.151) 0:02:29.065 
****** 2026-01-03 00:54:24.840112 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:54:24.840116 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:54:24.840120 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:54:24.840124 | orchestrator | 2026-01-03 00:54:24.840128 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-01-03 00:54:24.840132 | orchestrator | Saturday 03 January 2026 00:50:46 +0000 (0:00:01.069) 0:02:30.135 ****** 2026-01-03 00:54:24.840136 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:54:24.840140 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:54:24.840144 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:54:24.840147 | orchestrator | 2026-01-03 00:54:24.840151 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-01-03 00:54:24.840188 | orchestrator | Saturday 03 January 2026 00:50:48 +0000 (0:00:01.776) 0:02:31.911 ****** 2026-01-03 00:54:24.840193 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.840197 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.840688 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.840742 | orchestrator | 2026-01-03 00:54:24.840748 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-01-03 00:54:24.840752 | orchestrator | Saturday 03 January 2026 00:50:48 +0000 (0:00:00.284) 0:02:32.196 ****** 2026-01-03 00:54:24.840756 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:54:24.840760 | orchestrator | 2026-01-03 00:54:24.840764 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-01-03 00:54:24.840768 | orchestrator | Saturday 03 January 2026 00:50:49 +0000 (0:00:00.938) 0:02:33.135 ****** 2026-01-03 00:54:24.840773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 
'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.840868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.840880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.840887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.840893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.840907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.840911 | orchestrator | 2026-01-03 00:54:24.840918 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-01-03 00:54:24.840922 | orchestrator | Saturday 03 January 2026 00:50:52 +0000 (0:00:02.681) 0:02:35.816 ****** 2026-01-03 00:54:24.841085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:54:24.841097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.841101 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.841106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:54:24.841116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.841120 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.841172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:54:24.841178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.841183 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.841186 | orchestrator | 2026-01-03 00:54:24.841190 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-01-03 00:54:24.841194 | orchestrator | Saturday 03 January 2026 00:50:52 +0000 (0:00:00.596) 0:02:36.412 ****** 2026-01-03 00:54:24.841199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.841205 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.841209 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.841212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.841216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.841225 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.841228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.841232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.841236 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.841240 | orchestrator | 2026-01-03 00:54:24.841244 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-01-03 00:54:24.841247 | orchestrator | Saturday 03 January 2026 00:50:53 +0000 (0:00:00.825) 0:02:37.238 ****** 2026-01-03 00:54:24.841251 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:54:24.841255 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:54:24.841259 | 
orchestrator | changed: [testbed-node-2] 2026-01-03 00:54:24.841262 | orchestrator | 2026-01-03 00:54:24.841266 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-01-03 00:54:24.841270 | orchestrator | Saturday 03 January 2026 00:50:54 +0000 (0:00:01.412) 0:02:38.651 ****** 2026-01-03 00:54:24.841274 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:54:24.841277 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:54:24.841281 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:54:24.841285 | orchestrator | 2026-01-03 00:54:24.841289 | orchestrator | TASK [include_role : manila] *************************************************** 2026-01-03 00:54:24.841292 | orchestrator | Saturday 03 January 2026 00:50:57 +0000 (0:00:02.403) 0:02:41.054 ****** 2026-01-03 00:54:24.841296 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:54:24.841300 | orchestrator | 2026-01-03 00:54:24.841304 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-01-03 00:54:24.841310 | orchestrator | Saturday 03 January 2026 00:50:58 +0000 (0:00:01.234) 0:02:42.289 ****** 2026-01-03 00:54:24.841348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 
'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.841355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.841359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.841368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 
'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.841385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.841393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.841427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-03 00:54:24.841432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-03 00:54:24.841440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 00:54:24.841444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-03 00:54:24.841448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-03 00:54:24.841455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-03 00:54:24.841459 | orchestrator |
2026-01-03 00:54:24.841463 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2026-01-03 00:54:24.841467 | orchestrator | Saturday 03 January 2026 00:51:03 +0000 (0:00:04.803) 0:02:47.092 ******
2026-01-03 00:54:24.841500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 00:54:24.841509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-03 00:54:24.841513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-03 00:54:24.841517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-03 00:54:24.841521 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:54:24.841528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 00:54:24.841560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 00:54:24.841566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-03 00:54:24.841575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-03 00:54:24.841579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-03 00:54:24.841584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-03 00:54:24.841590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-03 00:54:24.841648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-03 00:54:24.841658 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:54:24.841666 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:54:24.841670 | orchestrator |
2026-01-03 00:54:24.841674 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-01-03 00:54:24.841683 | orchestrator | Saturday 03 January 2026 00:51:04 +0000 (0:00:00.662) 0:02:47.754 ******
2026-01-03 00:54:24.841687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-01-03 00:54:24.841692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-01-03 00:54:24.841695 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:54:24.841699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-01-03 00:54:24.841703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-01-03 00:54:24.841707 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:54:24.841711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-01-03 00:54:24.841715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-01-03 00:54:24.841718 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:54:24.841722 | orchestrator |
2026-01-03 00:54:24.841726 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2026-01-03 00:54:24.841730 | orchestrator | Saturday 03 January 2026 00:51:04 +0000 (0:00:00.765) 0:02:48.520 ******
2026-01-03 00:54:24.841734 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:54:24.841737 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:54:24.841741 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:54:24.841745 | orchestrator |
2026-01-03 00:54:24.841749 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2026-01-03 00:54:24.841753 | orchestrator | Saturday 03 January 2026 00:51:06 +0000 (0:00:01.388) 0:02:49.909 ******
2026-01-03 00:54:24.841756 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:54:24.841760 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:54:24.841764 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:54:24.841768 | orchestrator |
2026-01-03 00:54:24.841772 | orchestrator | TASK [include_role : mariadb] **************************************************
2026-01-03 00:54:24.841808 | orchestrator | Saturday 03 January 2026 00:51:08 +0000 (0:00:01.916) 0:02:51.825 ******
2026-01-03 00:54:24.841815 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:54:24.841821 | orchestrator |
2026-01-03 00:54:24.841827 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2026-01-03 00:54:24.841833 | orchestrator | Saturday 03 January 2026 00:51:09 +0000 (0:00:01.206) 0:02:53.031 ******
2026-01-03 00:54:24.841837 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-03 00:54:24.841841 | orchestrator |
2026-01-03 00:54:24.841845 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2026-01-03 00:54:24.841849 | orchestrator | Saturday 03 January 2026 00:51:12 +0000 (0:00:03.199) 0:02:56.231 ******
2026-01-03 00:54:24.841893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-03 00:54:24.841913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-03 00:54:24.841917 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:54:24.841922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-03 00:54:24.841936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-03 00:54:24.841972 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:54:24.841978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-03 00:54:24.841983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-03 00:54:24.841987 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:54:24.841991 | orchestrator |
2026-01-03 00:54:24.841994 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2026-01-03 00:54:24.841998 | orchestrator | Saturday 03 January 2026 00:51:14 +0000 (0:00:01.917) 0:02:58.149 ******
2026-01-03 00:54:24.842089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-03 00:54:24.842102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-03 00:54:24.842106 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:54:24.842110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-03 00:54:24.842115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-03 00:54:24.842123 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:54:24.842153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-03 00:54:24.842159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-03 00:54:24.842162 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:54:24.842166 | orchestrator |
2026-01-03 00:54:24.842170 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2026-01-03 00:54:24.842174 | orchestrator | Saturday 03 January 2026 00:51:16 +0000 (0:00:01.897) 0:03:00.047 ******
2026-01-03 00:54:24.842179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-01-03 00:54:24.842183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-01-03 00:54:24.842191 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:54:24.842195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-01-03 00:54:24.842239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-01-03 00:54:24.842245 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:54:24.842249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-01-03 00:54:24.842314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-01-03 00:54:24.842331 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:54:24.842335 | orchestrator |
2026-01-03 00:54:24.842338 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2026-01-03 00:54:24.842343 | orchestrator | Saturday 03 January 2026 00:51:20 +0000 (0:00:03.737) 0:03:03.784 ******
2026-01-03 00:54:24.842346 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:54:24.842350 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:54:24.842354 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:54:24.842358 | orchestrator |
2026-01-03 00:54:24.842362 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-01-03 00:54:24.842366 | orchestrator | Saturday 03 January 2026 00:51:22 +0000 (0:00:02.292) 0:03:06.077 ******
2026-01-03 00:54:24.842369 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:54:24.842373 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:54:24.842377 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:54:24.842380 | orchestrator |
2026-01-03 00:54:24.842384 | orchestrator | TASK [include_role : masakari] *************************************************
2026-01-03 00:54:24.842392 | orchestrator | Saturday 03 January 2026 00:51:24 +0000 (0:00:00.243) 0:03:07.697 ******
2026-01-03 00:54:24.842396 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:54:24.842400 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:54:24.842403 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:54:24.842407 | orchestrator |
2026-01-03 00:54:24.842411 | orchestrator | TASK [include_role : memcached] ************************************************
2026-01-03 00:54:24.842414 | orchestrator | Saturday 03 January 2026 00:51:24 +0000 (0:00:00.243) 0:03:07.941 ******
2026-01-03 00:54:24.842418 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:54:24.842422 | orchestrator |
2026-01-03 00:54:24.842426 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2026-01-03 00:54:24.842429 | orchestrator | Saturday 03 January 2026 00:51:25 +0000 (0:00:01.144) 0:03:09.085 ******
2026-01-03 00:54:24.842441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-03 00:54:24.842487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-03 00:54:24.842493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout':
'30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-03 00:54:24.842497 | orchestrator | 2026-01-03 00:54:24.842501 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-01-03 00:54:24.842505 | orchestrator | Saturday 03 January 2026 00:51:27 +0000 (0:00:01.669) 0:03:10.755 ****** 2026-01-03 00:54:24.842509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-03 00:54:24.842520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-03 00:54:24.842524 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.842528 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.842532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-03 00:54:24.842536 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.842539 | orchestrator | 2026-01-03 00:54:24.842543 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-01-03 00:54:24.842551 | orchestrator | Saturday 03 January 2026 00:51:27 +0000 (0:00:00.331) 0:03:11.086 ****** 2026-01-03 00:54:24.842555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-03 00:54:24.842589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'active_passive': True}})  2026-01-03 00:54:24.842594 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.842598 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.842602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-03 00:54:24.842605 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.842609 | orchestrator | 2026-01-03 00:54:24.842613 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-01-03 00:54:24.842617 | orchestrator | Saturday 03 January 2026 00:51:28 +0000 (0:00:00.713) 0:03:11.800 ****** 2026-01-03 00:54:24.842621 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.842624 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.842628 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.842632 | orchestrator | 2026-01-03 00:54:24.842636 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-01-03 00:54:24.842642 | orchestrator | Saturday 03 January 2026 00:51:28 +0000 (0:00:00.376) 0:03:12.177 ****** 2026-01-03 00:54:24.842653 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.842659 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.842666 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.842672 | orchestrator | 2026-01-03 00:54:24.842679 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-01-03 00:54:24.842685 | orchestrator | Saturday 03 January 2026 00:51:29 +0000 (0:00:01.044) 0:03:13.221 ****** 2026-01-03 00:54:24.842692 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.842698 | orchestrator | skipping: [testbed-node-1] 
2026-01-03 00:54:24.842704 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.842711 | orchestrator | 2026-01-03 00:54:24.842717 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-01-03 00:54:24.842724 | orchestrator | Saturday 03 January 2026 00:51:29 +0000 (0:00:00.272) 0:03:13.494 ****** 2026-01-03 00:54:24.842729 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:54:24.842733 | orchestrator | 2026-01-03 00:54:24.842747 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-01-03 00:54:24.842751 | orchestrator | Saturday 03 January 2026 00:51:31 +0000 (0:00:01.326) 0:03:14.820 ****** 2026-01-03 00:54:24.842756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.842762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.842858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-03 00:54:24.842871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 
'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-03 00:54:24.842877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.842884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-03 00:54:24.842889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': 
{'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-03 00:54:24.842897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-03 00:54:24.842922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-03 00:54:24.842931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.842936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-03 00:54:24.842940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': 
['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.842945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-03 00:54:24.842954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.842980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-03 00:54:24.842989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.842993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 
6640'], 'timeout': '30'}}})  2026-01-03 00:54:24.842997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-03 00:54:24.843004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 
5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-03 00:54:24.843028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.843037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-03 00:54:24.843041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-03 00:54:24.843045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': 
True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-03 00:54:24.843049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-03 00:54:24.843053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.843083 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-01-03 00:54:24.843092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-03 00:54:24.843096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-01-03 00:54:24.843100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-01-03 00:54:24.843105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-01-03 00:54:24.843112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 00:54:24.843180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-01-03 00:54:24.843187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-01-03 00:54:24.843191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-01-03 00:54:24.843195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-01-03 00:54:24.843202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-03 00:54:24.843242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-03 00:54:24.843248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-01-03 00:54:24.843252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-03 00:54:24.843256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-01-03 00:54:24.843261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-01-03 00:54:24.843273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-03 00:54:24.843281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-01-03 00:54:24.843319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-01-03 00:54:24.843325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-01-03 00:54:24.843329 | orchestrator |
2026-01-03 00:54:24.843333 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-01-03 00:54:24.843338 | orchestrator | Saturday 03 January 2026 00:51:35 +0000 (0:00:04.624) 0:03:19.445 ******
2026-01-03 00:54:24.843342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 00:54:24.843346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-01-03 00:54:24.843385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-01-03 00:54:24.843391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-01-03 00:54:24.843396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-01-03 00:54:24.843400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-03 00:54:24.843405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-03 00:54:24.843409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-01-03 00:54:24.843449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-03 00:54:24.843454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-01-03 00:54:24.843459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-01-03 00:54:24.843471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-03 00:54:24.843476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-01-03 00:54:24.843480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-01-03 00:54:24.843520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-01-03 00:54:24.843526 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:54:24.843530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 00:54:24.843541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-01-03 00:54:24.843546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-01-03 00:54:24.843575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-01-03 00:54:24.843610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-01-03 00:54:24.843616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-03 00:54:24.843620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-03 00:54:24.843624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-01-03 00:54:24.843628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-03 00:54:24.843636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-01-03 00:54:24.843651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-01-03 00:54:24.843692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-03 00:54:24.843700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-01-03 00:54:24.843706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-01-03 00:54:24.843713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-01-03 00:54:24.843725 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:54:24.843735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 00:54:24.843811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-01-03 00:54:24.843819 | orchestrator | skipping: [testbed-node-2]
=> (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-03 00:54:24.843824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-03 00:54:24.843834 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.843838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-03 00:54:24.843846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-03 00:54:24.843881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-03 00:54:24.843887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-03 00:54:24.843891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.843899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': 
{'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-03 00:54:24.843903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-03 00:54:24.843911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.843942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-03 00:54:24.843956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-03 00:54:24.843960 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.843964 | orchestrator | 2026-01-03 00:54:24.843968 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-01-03 00:54:24.843972 | orchestrator | Saturday 03 January 2026 00:51:37 +0000 (0:00:01.952) 0:03:21.397 ****** 2026-01-03 00:54:24.843976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.843986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.843991 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.843995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.843999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.844002 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.844006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.844010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.844014 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.844018 | orchestrator | 2026-01-03 00:54:24.844022 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-01-03 00:54:24.844025 | orchestrator | Saturday 03 January 2026 00:51:39 +0000 (0:00:01.459) 0:03:22.856 ****** 2026-01-03 
00:54:24.844029 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:54:24.844033 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:54:24.844037 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:54:24.844040 | orchestrator | 2026-01-03 00:54:24.844044 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-01-03 00:54:24.844048 | orchestrator | Saturday 03 January 2026 00:51:40 +0000 (0:00:01.349) 0:03:24.206 ****** 2026-01-03 00:54:24.844052 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:54:24.844055 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:54:24.844062 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:54:24.844066 | orchestrator | 2026-01-03 00:54:24.844070 | orchestrator | TASK [include_role : placement] ************************************************ 2026-01-03 00:54:24.844074 | orchestrator | Saturday 03 January 2026 00:51:42 +0000 (0:00:01.946) 0:03:26.152 ****** 2026-01-03 00:54:24.844078 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:54:24.844081 | orchestrator | 2026-01-03 00:54:24.844085 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-01-03 00:54:24.844089 | orchestrator | Saturday 03 January 2026 00:51:44 +0000 (0:00:01.562) 0:03:27.715 ****** 2026-01-03 00:54:24.844123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 
'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-03 00:54:24.844134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-03 00:54:24.844139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-03 00:54:24.844143 | orchestrator | 2026-01-03 00:54:24.844147 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-01-03 00:54:24.844160 | orchestrator | Saturday 03 January 2026 00:51:47 +0000 (0:00:03.961) 0:03:31.676 ****** 2026-01-03 00:54:24.844196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-03 
00:54:24.844203 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.844207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-03 00:54:24.844215 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.844219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-03 00:54:24.844224 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.844228 | orchestrator | 2026-01-03 00:54:24.844232 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-01-03 00:54:24.844236 | orchestrator | Saturday 03 January 2026 00:51:48 +0000 (0:00:00.437) 0:03:32.114 ****** 2026-01-03 00:54:24.844240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-03 00:54:24.844244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-03 00:54:24.844250 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.844254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-03 00:54:24.844261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-03 00:54:24.844265 | orchestrator | skipping: [testbed-node-1] 
2026-01-03 00:54:24.844269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-03 00:54:24.844301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-03 00:54:24.844307 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.844311 | orchestrator | 2026-01-03 00:54:24.844320 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-01-03 00:54:24.844324 | orchestrator | Saturday 03 January 2026 00:51:49 +0000 (0:00:00.792) 0:03:32.906 ****** 2026-01-03 00:54:24.844328 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:54:24.844332 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:54:24.844335 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:54:24.844339 | orchestrator | 2026-01-03 00:54:24.844343 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-01-03 00:54:24.844347 | orchestrator | Saturday 03 January 2026 00:51:50 +0000 (0:00:01.195) 0:03:34.102 ****** 2026-01-03 00:54:24.844351 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:54:24.844355 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:54:24.844358 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:54:24.844362 | orchestrator | 2026-01-03 00:54:24.844366 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-01-03 00:54:24.844374 | orchestrator | Saturday 03 January 2026 00:51:52 +0000 (0:00:01.989) 0:03:36.092 ****** 2026-01-03 00:54:24.844378 | orchestrator | 
included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:54:24.844389 | orchestrator | 2026-01-03 00:54:24.844393 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-01-03 00:54:24.844397 | orchestrator | Saturday 03 January 2026 00:51:53 +0000 (0:00:01.145) 0:03:37.237 ****** 2026-01-03 00:54:24.844401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.844405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.844441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.844451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.844456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.844460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.844464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.844499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.844510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.844514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.844518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.844523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.844527 | orchestrator | 2026-01-03 00:54:24.844531 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-01-03 00:54:24.844535 | orchestrator | Saturday 03 January 2026 00:51:59 +0000 (0:00:05.604) 0:03:42.841 ****** 2026-01-03 00:54:24.844554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:54:24.844563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:54:24.844568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.844580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 
'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:54:24.844584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.844592 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.844610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:54:24.844615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.844619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.844623 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.844627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:54:24.844634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:54:24.844654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.844658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.844662 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.844666 | orchestrator | 2026-01-03 00:54:24.844670 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-01-03 00:54:24.844674 | orchestrator | Saturday 03 January 2026 00:51:59 +0000 (0:00:00.671) 0:03:43.512 ****** 2026-01-03 00:54:24.844678 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.844682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.844687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.844696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.844700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.844703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.844707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.844718 | 
orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.844724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.844730 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.844736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.844746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.844752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.844806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.844813 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.844819 | orchestrator | 2026-01-03 00:54:24.844826 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-01-03 00:54:24.844831 | orchestrator | Saturday 03 January 2026 00:52:00 +0000 (0:00:00.839) 0:03:44.352 ****** 2026-01-03 00:54:24.844835 | 
orchestrator | changed: [testbed-node-0] 2026-01-03 00:54:24.844839 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:54:24.844843 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:54:24.844847 | orchestrator | 2026-01-03 00:54:24.844850 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-01-03 00:54:24.844854 | orchestrator | Saturday 03 January 2026 00:52:01 +0000 (0:00:01.290) 0:03:45.643 ****** 2026-01-03 00:54:24.844858 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:54:24.844862 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:54:24.844865 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:54:24.844869 | orchestrator | 2026-01-03 00:54:24.844873 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-01-03 00:54:24.844877 | orchestrator | Saturday 03 January 2026 00:52:03 +0000 (0:00:01.615) 0:03:47.258 ****** 2026-01-03 00:54:24.844880 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:54:24.844884 | orchestrator | 2026-01-03 00:54:24.844888 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-01-03 00:54:24.844892 | orchestrator | Saturday 03 January 2026 00:52:04 +0000 (0:00:01.332) 0:03:48.591 ****** 2026-01-03 00:54:24.844896 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-01-03 00:54:24.844900 | orchestrator | 2026-01-03 00:54:24.844904 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-01-03 00:54:24.844907 | orchestrator | Saturday 03 January 2026 00:52:05 +0000 (0:00:01.033) 0:03:49.625 ****** 2026-01-03 00:54:24.844913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': 
True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-03 00:54:24.844925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-03 00:54:24.844933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-03 00:54:24.844943 | orchestrator | 2026-01-03 00:54:24.844949 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-01-03 00:54:24.844955 | orchestrator | Saturday 03 January 2026 00:52:10 +0000 (0:00:04.127) 0:03:53.752 ****** 2026-01-03 00:54:24.844966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-03 00:54:24.844972 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.844999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-03 00:54:24.845006 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.845012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-03 00:54:24.845018 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.845024 | orchestrator | 2026-01-03 00:54:24.845029 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-01-03 00:54:24.845035 | orchestrator | Saturday 03 January 2026 00:52:11 +0000 (0:00:01.260) 0:03:55.013 ****** 2026-01-03 00:54:24.845042 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-03 00:54:24.845049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-03 00:54:24.845064 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.845069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-03 00:54:24.845076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-03 00:54:24.845082 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.845088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-03 00:54:24.845094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-03 00:54:24.845100 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.845106 | orchestrator | 2026-01-03 00:54:24.845112 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 
2026-01-03 00:54:24.845118 | orchestrator | Saturday 03 January 2026 00:52:12 +0000 (0:00:01.451) 0:03:56.464 ****** 2026-01-03 00:54:24.845123 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:54:24.845129 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:54:24.845135 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:54:24.845141 | orchestrator | 2026-01-03 00:54:24.845147 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-03 00:54:24.845153 | orchestrator | Saturday 03 January 2026 00:52:14 +0000 (0:00:02.077) 0:03:58.542 ****** 2026-01-03 00:54:24.845160 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:54:24.845166 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:54:24.845173 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:54:24.845179 | orchestrator | 2026-01-03 00:54:24.845185 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-01-03 00:54:24.845191 | orchestrator | Saturday 03 January 2026 00:52:17 +0000 (0:00:02.536) 0:04:01.079 ****** 2026-01-03 00:54:24.845198 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-01-03 00:54:24.845204 | orchestrator | 2026-01-03 00:54:24.845210 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-01-03 00:54:24.845215 | orchestrator | Saturday 03 January 2026 00:52:18 +0000 (0:00:01.444) 0:04:02.523 ****** 2026-01-03 00:54:24.845222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-03 00:54:24.845229 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.845251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-03 00:54:24.845262 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.845269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-03 00:54:24.845275 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.845280 | orchestrator | 2026-01-03 00:54:24.845286 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-01-03 00:54:24.845349 | orchestrator | Saturday 03 January 2026 00:52:21 +0000 (0:00:02.213) 0:04:04.737 ****** 2026-01-03 00:54:24.845364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': 
False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-03 00:54:24.845370 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.845376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-03 00:54:24.845382 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.845388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-03 00:54:24.845394 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.845400 | orchestrator | 2026-01-03 00:54:24.845406 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-01-03 00:54:24.845412 | orchestrator | Saturday 03 January 2026 00:52:22 +0000 
(0:00:01.530) 0:04:06.267 ****** 2026-01-03 00:54:24.845418 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.845424 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.845430 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.845435 | orchestrator | 2026-01-03 00:54:24.845441 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-03 00:54:24.845447 | orchestrator | Saturday 03 January 2026 00:52:24 +0000 (0:00:01.810) 0:04:08.078 ****** 2026-01-03 00:54:24.845453 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:54:24.845459 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:54:24.845465 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:54:24.845471 | orchestrator | 2026-01-03 00:54:24.845480 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-03 00:54:24.845486 | orchestrator | Saturday 03 January 2026 00:52:26 +0000 (0:00:02.472) 0:04:10.550 ****** 2026-01-03 00:54:24.845499 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:54:24.845506 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:54:24.845511 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:54:24.845517 | orchestrator | 2026-01-03 00:54:24.845523 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-01-03 00:54:24.845529 | orchestrator | Saturday 03 January 2026 00:52:29 +0000 (0:00:02.780) 0:04:13.330 ****** 2026-01-03 00:54:24.845553 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-01-03 00:54:24.845560 | orchestrator | 2026-01-03 00:54:24.845566 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-01-03 00:54:24.845571 | orchestrator | Saturday 03 January 2026 00:52:30 +0000 (0:00:00.768) 0:04:14.099 ****** 2026-01-03 00:54:24.845578 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-03 00:54:24.845584 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.845590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-03 00:54:24.845595 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.845601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-03 00:54:24.845606 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.845612 | orchestrator | 2026-01-03 00:54:24.845617 | orchestrator | TASK [haproxy-config : 
Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-01-03 00:54:24.845623 | orchestrator | Saturday 03 January 2026 00:52:32 +0000 (0:00:01.992) 0:04:16.091 ****** 2026-01-03 00:54:24.845629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-03 00:54:24.845636 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.845641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-03 00:54:24.845655 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.845662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 
'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-03 00:54:24.845666 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.845669 | orchestrator | 2026-01-03 00:54:24.845673 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-01-03 00:54:24.845677 | orchestrator | Saturday 03 January 2026 00:52:33 +0000 (0:00:01.254) 0:04:17.346 ****** 2026-01-03 00:54:24.845681 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.845684 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.845702 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.845706 | orchestrator | 2026-01-03 00:54:24.845710 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-03 00:54:24.845714 | orchestrator | Saturday 03 January 2026 00:52:35 +0000 (0:00:01.387) 0:04:18.733 ****** 2026-01-03 00:54:24.845718 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:54:24.845722 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:54:24.845725 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:54:24.845729 | orchestrator | 2026-01-03 00:54:24.845733 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-03 00:54:24.845738 | orchestrator | Saturday 03 January 2026 00:52:37 +0000 (0:00:02.450) 0:04:21.184 ****** 2026-01-03 00:54:24.845744 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:54:24.845750 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:54:24.845756 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:54:24.845761 | orchestrator | 2026-01-03 00:54:24.845768 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-01-03 00:54:24.845773 | orchestrator | Saturday 03 January 2026 00:52:40 +0000 (0:00:02.704) 0:04:23.888 ****** 2026-01-03 00:54:24.845794 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 
00:54:24.845800 | orchestrator | 2026-01-03 00:54:24.845806 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-01-03 00:54:24.845813 | orchestrator | Saturday 03 January 2026 00:52:41 +0000 (0:00:01.397) 0:04:25.286 ****** 2026-01-03 00:54:24.845820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-03 00:54:24.845827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-03 00:54:24.845839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-03 00:54:24.845851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-03 00:54:24.845873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.845877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-03 00:54:24.845881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-03 00:54:24.845886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})  2026-01-03 00:54:24.845894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-03 00:54:24.845901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-03 00:54:24.845916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.845920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-03 00:54:24.845924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-03 00:54:24.845928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-03 00:54:24.845937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.845940 | orchestrator | 2026-01-03 00:54:24.845944 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-01-03 00:54:24.845948 | orchestrator | Saturday 03 January 2026 00:52:44 +0000 (0:00:03.342) 0:04:28.628 ****** 2026-01-03 00:54:24.845956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-03 00:54:24.845972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-03 00:54:24.845976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-03 00:54:24.845980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 
'timeout': '30'}}})  2026-01-03 00:54:24.845988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.845992 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.845996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-03 00:54:24.846003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-03 00:54:24.846076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-03 00:54:24.846084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-03 00:54:24.846088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.846096 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.846100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-03 00:54:24.846104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-03 00:54:24.846108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-03 00:54:24.846133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-03 00:54:24.846137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:54:24.846141 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.846145 | orchestrator | 2026-01-03 00:54:24.846149 | orchestrator | TASK 
[haproxy-config : Configuring firewall for octavia] *********************** 2026-01-03 00:54:24.846153 | orchestrator | Saturday 03 January 2026 00:52:45 +0000 (0:00:00.961) 0:04:29.590 ****** 2026-01-03 00:54:24.846157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-03 00:54:24.846166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-03 00:54:24.846171 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.846175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-03 00:54:24.846179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-03 00:54:24.846183 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.846187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-03 00:54:24.846190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-03 00:54:24.846194 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.846198 | orchestrator | 2026-01-03 
00:54:24.846202 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-01-03 00:54:24.846205 | orchestrator | Saturday 03 January 2026 00:52:47 +0000 (0:00:01.207) 0:04:30.797 ****** 2026-01-03 00:54:24.846209 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:54:24.846213 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:54:24.846217 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:54:24.846220 | orchestrator | 2026-01-03 00:54:24.846224 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-01-03 00:54:24.846228 | orchestrator | Saturday 03 January 2026 00:52:48 +0000 (0:00:01.223) 0:04:32.021 ****** 2026-01-03 00:54:24.846231 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:54:24.846235 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:54:24.846239 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:54:24.846243 | orchestrator | 2026-01-03 00:54:24.846246 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-01-03 00:54:24.846250 | orchestrator | Saturday 03 January 2026 00:52:50 +0000 (0:00:02.122) 0:04:34.143 ****** 2026-01-03 00:54:24.846254 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:54:24.846258 | orchestrator | 2026-01-03 00:54:24.846261 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-01-03 00:54:24.846265 | orchestrator | Saturday 03 January 2026 00:52:52 +0000 (0:00:01.567) 0:04:35.711 ****** 2026-01-03 00:54:24.846284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.846291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.846299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.846304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-03 00:54:24.846324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-03 00:54:24.846329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-03 00:54:24.846337 | orchestrator | 2026-01-03 00:54:24.846341 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-01-03 00:54:24.846345 | orchestrator | Saturday 03 January 2026 00:52:57 +0000 (0:00:05.138) 0:04:40.850 ****** 2026-01-03 00:54:24.846349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:54:24.846353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-03 00:54:24.846357 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.846376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:54:24.846385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-03 00:54:24.846389 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.846393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:54:24.846397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-03 00:54:24.846401 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.846405 | orchestrator | 2026-01-03 00:54:24.846412 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-01-03 00:54:24.846416 | orchestrator | Saturday 03 January 2026 00:52:57 +0000 (0:00:00.639) 0:04:41.489 ****** 2026-01-03 00:54:24.846424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.846438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-03 00:54:24.846443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-03 00:54:24.846449 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.846453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.846457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-03 00:54:24.846461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-03 00:54:24.846465 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.846468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.846472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-03 00:54:24.846476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-03 00:54:24.846480 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.846484 | orchestrator | 2026-01-03 00:54:24.846487 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-01-03 00:54:24.846491 | orchestrator | Saturday 03 January 2026 00:52:59 +0000 (0:00:01.463) 0:04:42.953 ****** 2026-01-03 00:54:24.846495 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.846499 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.846502 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.846506 | orchestrator | 2026-01-03 00:54:24.846510 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-01-03 00:54:24.846514 | orchestrator | Saturday 03 January 2026 00:52:59 +0000 (0:00:00.440) 0:04:43.394 ****** 2026-01-03 00:54:24.846518 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.846521 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.846525 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.846529 | orchestrator | 2026-01-03 00:54:24.846533 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-01-03 00:54:24.846540 | orchestrator | Saturday 03 January 2026 00:53:01 +0000 (0:00:01.307) 0:04:44.701 ****** 2026-01-03 00:54:24.846544 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:54:24.846547 | orchestrator | 2026-01-03 00:54:24.846551 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-01-03 00:54:24.846555 | orchestrator | Saturday 03 January 2026 
00:53:02 +0000 (0:00:01.679) 0:04:46.381 ****** 2026-01-03 00:54:24.846572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-03 00:54:24.846577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 00:54:24.846581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:54:24.846586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:54:24.846590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-03 00:54:24.846598 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 00:54:24.846605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 00:54:24.846621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:54:24.846625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:54:24.846629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 00:54:24.846633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-03 00:54:24.846641 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 00:54:24.846645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:54:24.846663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:54:24.846667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 00:54:24.846671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.846676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 
45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-03 00:54:24.846701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:54:24.846705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:54:24.846712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-03 00:54:24.846728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.846733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-03 00:54:24.846737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:54:24.846744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:54:24.846748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-03 00:54:24.846766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 
'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:54:24.846771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-03 00:54:24.846817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:54:24.846824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:54:24.846835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-03 00:54:24.846841 | orchestrator | 2026-01-03 00:54:24.846847 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-01-03 00:54:24.846854 | orchestrator | Saturday 03 January 2026 00:53:06 +0000 (0:00:04.214) 0:04:50.595 ****** 2026-01-03 00:54:24.846885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-03 00:54:24.846893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 00:54:24.846899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:54:24.846903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:54:24.846907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 00:54:24.846915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-03 00:54:24.846933 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:54:24.846939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 00:54:24.846946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-03 00:54:24.846952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:54:24.846963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:54:24.846969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:54:24.846975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:54:24.846987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 00:54:24.846993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-03 00:54:24.847000 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.847006 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:54:24.847017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-03 
00:54:24.847024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:54:24.847030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:54:24.847041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-03 00:54:24.847053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-03 00:54:24.847060 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.847067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 00:54:24.847079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:54:24.847086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:54:24.847093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 00:54:24.847106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 
'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:54:24.847113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-03 00:54:24.847120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:54:24.847134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:54:24.847140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-03 00:54:24.847146 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.847153 | orchestrator | 2026-01-03 00:54:24.847160 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-01-03 00:54:24.847167 | orchestrator | Saturday 03 January 2026 00:53:07 +0000 (0:00:00.837) 0:04:51.432 ****** 2026-01-03 00:54:24.847174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-03 00:54:24.847183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-03 00:54:24.847194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.847204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.847212 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.847219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-03 00:54:24.847225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-03 00:54:24.847237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.847243 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.847250 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.847257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-03 00:54:24.847264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-03 00:54:24.847271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-03 00:54:24.847278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  
2026-01-03 00:54:24.847284 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.847291 | orchestrator | 2026-01-03 00:54:24.847298 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-01-03 00:54:24.847305 | orchestrator | Saturday 03 January 2026 00:53:08 +0000 (0:00:01.012) 0:04:52.445 ****** 2026-01-03 00:54:24.847311 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.847318 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.847325 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.847332 | orchestrator | 2026-01-03 00:54:24.847338 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-01-03 00:54:24.847345 | orchestrator | Saturday 03 January 2026 00:53:09 +0000 (0:00:00.770) 0:04:53.216 ****** 2026-01-03 00:54:24.847355 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.847361 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.847368 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.847375 | orchestrator | 2026-01-03 00:54:24.847381 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-01-03 00:54:24.847388 | orchestrator | Saturday 03 January 2026 00:53:10 +0000 (0:00:01.258) 0:04:54.474 ****** 2026-01-03 00:54:24.847394 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:54:24.847401 | orchestrator | 2026-01-03 00:54:24.847408 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-01-03 00:54:24.847422 | orchestrator | Saturday 03 January 2026 00:53:12 +0000 (0:00:01.430) 0:04:55.905 ****** 2026-01-03 00:54:24.847429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 
'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-03 00:54:24.847436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-03 00:54:24.847443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 
'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-03 00:54:24.847450 | orchestrator | 2026-01-03 00:54:24.847456 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-01-03 00:54:24.847463 | orchestrator | Saturday 03 January 2026 00:53:14 +0000 (0:00:02.670) 0:04:58.576 ****** 2026-01-03 00:54:24.847476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 
'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-03 00:54:24.847488 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.847494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-03 00:54:24.847501 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.847507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-03 00:54:24.847514 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.847520 | orchestrator | 2026-01-03 00:54:24.847527 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-01-03 00:54:24.847533 | orchestrator | Saturday 03 January 2026 00:53:15 +0000 (0:00:00.762) 0:04:59.338 ****** 2026-01-03 00:54:24.847540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-03 00:54:24.847547 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.847553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-03 00:54:24.847560 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.847567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-03 00:54:24.847574 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.847580 | orchestrator | 2026-01-03 00:54:24.847587 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-01-03 00:54:24.847594 | orchestrator | Saturday 03 January 2026 00:53:16 +0000 (0:00:00.628) 0:04:59.966 ****** 2026-01-03 00:54:24.847600 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.847607 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.847614 | orchestrator | skipping: [testbed-node-2] 2026-01-03 
00:54:24.847621 | orchestrator | 2026-01-03 00:54:24.847627 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-01-03 00:54:24.847638 | orchestrator | Saturday 03 January 2026 00:53:16 +0000 (0:00:00.435) 0:05:00.402 ****** 2026-01-03 00:54:24.847645 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.847652 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.847658 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.847665 | orchestrator | 2026-01-03 00:54:24.847672 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-01-03 00:54:24.847679 | orchestrator | Saturday 03 January 2026 00:53:18 +0000 (0:00:01.382) 0:05:01.784 ****** 2026-01-03 00:54:24.847685 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:54:24.847695 | orchestrator | 2026-01-03 00:54:24.847702 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-01-03 00:54:24.847708 | orchestrator | Saturday 03 January 2026 00:53:19 +0000 (0:00:01.691) 0:05:03.475 ****** 2026-01-03 00:54:24.847720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 
'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-01-03 00:54:24.847727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-01-03 00:54:24.847734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-01-03 00:54:24.847750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-03 00:54:24.847761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-03 00:54:24.847769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-03 00:54:24.847775 | orchestrator | 2026-01-03 00:54:24.847803 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-01-03 00:54:24.847809 | orchestrator | Saturday 03 January 2026 00:53:25 +0000 (0:00:05.996) 0:05:09.472 ****** 2026-01-03 00:54:24.847815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-01-03 00:54:24.847830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-03 00:54:24.847837 | orchestrator | 
skipping: [testbed-node-0] 2026-01-03 00:54:24.847847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-01-03 00:54:24.847853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-03 00:54:24.847860 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.847866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-01-03 00:54:24.847881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-03 00:54:24.847890 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.847896 | orchestrator | 2026-01-03 00:54:24.847902 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-01-03 00:54:24.847908 | orchestrator | Saturday 03 January 2026 00:53:26 +0000 (0:00:01.069) 0:05:10.541 ****** 2026-01-03 00:54:24.847915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-03 00:54:24.847921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-03 00:54:24.847929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-03 00:54:24.847935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-03 00:54:24.847941 | orchestrator | skipping: 
[testbed-node-0] 2026-01-03 00:54:24.847948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-03 00:54:24.847954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-03 00:54:24.847960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-03 00:54:24.847973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-03 00:54:24.847979 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.847986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-03 00:54:24.847992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-03 00:54:24.847998 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-03 00:54:24.848004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-03 00:54:24.848010 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.848016 | orchestrator | 2026-01-03 00:54:24.848026 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-01-03 00:54:24.848032 | orchestrator | Saturday 03 January 2026 00:53:28 +0000 (0:00:01.299) 0:05:11.841 ****** 2026-01-03 00:54:24.848038 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:54:24.848045 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:54:24.848051 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:54:24.848057 | orchestrator | 2026-01-03 00:54:24.848063 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-01-03 00:54:24.848069 | orchestrator | Saturday 03 January 2026 00:53:29 +0000 (0:00:01.318) 0:05:13.159 ****** 2026-01-03 00:54:24.848075 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:54:24.848084 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:54:24.848090 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:54:24.848097 | orchestrator | 2026-01-03 00:54:24.848103 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-01-03 00:54:24.848109 | orchestrator | Saturday 03 January 2026 00:53:31 +0000 (0:00:02.043) 0:05:15.203 ****** 2026-01-03 00:54:24.848114 | orchestrator | skipping: [testbed-node-0] 2026-01-03 
00:54:24.848120 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.848126 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.848132 | orchestrator | 2026-01-03 00:54:24.848138 | orchestrator | TASK [include_role : trove] **************************************************** 2026-01-03 00:54:24.848144 | orchestrator | Saturday 03 January 2026 00:53:31 +0000 (0:00:00.306) 0:05:15.509 ****** 2026-01-03 00:54:24.848150 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.848156 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.848161 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.848167 | orchestrator | 2026-01-03 00:54:24.848173 | orchestrator | TASK [include_role : venus] **************************************************** 2026-01-03 00:54:24.848179 | orchestrator | Saturday 03 January 2026 00:53:32 +0000 (0:00:00.587) 0:05:16.097 ****** 2026-01-03 00:54:24.848185 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.848191 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.848197 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.848203 | orchestrator | 2026-01-03 00:54:24.848214 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-01-03 00:54:24.848220 | orchestrator | Saturday 03 January 2026 00:53:32 +0000 (0:00:00.292) 0:05:16.390 ****** 2026-01-03 00:54:24.848224 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.848228 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.848232 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.848236 | orchestrator | 2026-01-03 00:54:24.848239 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-01-03 00:54:24.848243 | orchestrator | Saturday 03 January 2026 00:53:33 +0000 (0:00:00.314) 0:05:16.705 ****** 2026-01-03 00:54:24.848247 | orchestrator | skipping: [testbed-node-0] 2026-01-03 
00:54:24.848250 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.848254 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.848258 | orchestrator | 2026-01-03 00:54:24.848262 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-01-03 00:54:24.848265 | orchestrator | Saturday 03 January 2026 00:53:33 +0000 (0:00:00.314) 0:05:17.019 ****** 2026-01-03 00:54:24.848269 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:54:24.848273 | orchestrator | 2026-01-03 00:54:24.848276 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-01-03 00:54:24.848280 | orchestrator | Saturday 03 January 2026 00:53:35 +0000 (0:00:01.770) 0:05:18.790 ****** 2026-01-03 00:54:24.848284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-03 00:54:24.848289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-03 00:54:24.848296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-03 00:54:24.848305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-03 00:54:24.848313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-03 00:54:24.848318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-03 00:54:24.848322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-03 00:54:24.848326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-03 00:54:24.848330 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-03 00:54:24.848333 | orchestrator | 2026-01-03 00:54:24.848337 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-01-03 00:54:24.848341 | orchestrator | Saturday 03 January 2026 00:53:37 +0000 (0:00:02.605) 0:05:21.396 ****** 2026-01-03 00:54:24.848345 | orchestrator | changed: [testbed-node-0] => { 2026-01-03 00:54:24.848350 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:54:24.848353 | orchestrator | } 2026-01-03 00:54:24.848357 | orchestrator | changed: [testbed-node-1] => { 2026-01-03 00:54:24.848361 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:54:24.848365 | orchestrator | } 2026-01-03 00:54:24.848368 | orchestrator | changed: [testbed-node-2] => { 2026-01-03 00:54:24.848372 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:54:24.848376 | orchestrator | } 2026-01-03 00:54:24.848379 | orchestrator | 2026-01-03 00:54:24.848383 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-03 00:54:24.848390 | orchestrator | Saturday 03 January 2026 00:53:38 +0000 (0:00:00.364) 0:05:21.760 ****** 2026-01-03 00:54:24.848396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-03 00:54:24.848404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-03 00:54:24.848408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:54:24.848412 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:54:24.848416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-03 00:54:24.848420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-03 00:54:24.848424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:54:24.848428 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:54:24.848432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-03 00:54:24.848443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-03 00:54:24.848448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:54:24.848451 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:54:24.848455 | orchestrator | 2026-01-03 00:54:24.848459 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-01-03 00:54:24.848463 | orchestrator | Saturday 03 January 2026 00:53:39 +0000 (0:00:01.707) 0:05:23.468 ****** 2026-01-03 00:54:24.848467 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:54:24.848471 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:54:24.848475 | orchestrator | ok: [testbed-node-2] 
2026-01-03 00:54:24.848478 | orchestrator |
2026-01-03 00:54:24.848482 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2026-01-03 00:54:24.848486 | orchestrator | Saturday 03 January 2026 00:53:40 +0000 (0:00:01.193) 0:05:24.662 ******
2026-01-03 00:54:24.848490 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:54:24.848493 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:54:24.848497 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:54:24.848501 | orchestrator |
2026-01-03 00:54:24.848504 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-01-03 00:54:24.848508 | orchestrator | Saturday 03 January 2026 00:53:41 +0000 (0:00:00.347) 0:05:25.009 ******
2026-01-03 00:54:24.848512 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:54:24.848516 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:54:24.848519 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:54:24.848523 | orchestrator |
2026-01-03 00:54:24.848527 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-01-03 00:54:24.848530 | orchestrator | Saturday 03 January 2026 00:53:42 +0000 (0:00:00.947) 0:05:25.957 ******
2026-01-03 00:54:24.848534 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:54:24.848538 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:54:24.848542 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:54:24.848545 | orchestrator |
2026-01-03 00:54:24.848549 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-01-03 00:54:24.848553 | orchestrator | Saturday 03 January 2026 00:53:43 +0000 (0:00:01.028) 0:05:26.986 ******
2026-01-03 00:54:24.848556 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:54:24.848560 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:54:24.848564 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:54:24.848568 | orchestrator |
2026-01-03 00:54:24.848571 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-01-03 00:54:24.848575 | orchestrator | Saturday 03 January 2026 00:53:44 +0000 (0:00:01.518) 0:05:28.505 ******
2026-01-03 00:54:24.848579 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:54:24.848587 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:54:24.848590 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:54:24.848594 | orchestrator |
2026-01-03 00:54:24.848598 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-01-03 00:54:24.848602 | orchestrator | Saturday 03 January 2026 00:53:54 +0000 (0:00:09.472) 0:05:37.977 ******
2026-01-03 00:54:24.848605 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:54:24.848609 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:54:24.848613 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:54:24.848616 | orchestrator |
2026-01-03 00:54:24.848620 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-01-03 00:54:24.848624 | orchestrator | Saturday 03 January 2026 00:53:55 +0000 (0:00:00.764) 0:05:38.741 ******
2026-01-03 00:54:24.848628 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:54:24.848631 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:54:24.848635 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:54:24.848639 | orchestrator |
2026-01-03 00:54:24.848643 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-01-03 00:54:24.848646 | orchestrator | Saturday 03 January 2026 00:54:08 +0000 (0:00:13.143) 0:05:51.884 ******
2026-01-03 00:54:24.848650 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:54:24.848654 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:54:24.848658 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:54:24.848661 | orchestrator |
2026-01-03 00:54:24.848665 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-01-03 00:54:24.848669 | orchestrator | Saturday 03 January 2026 00:54:09 +0000 (0:00:01.150) 0:05:53.035 ******
2026-01-03 00:54:24.848672 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:54:24.848676 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:54:24.848737 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:54:24.848755 | orchestrator |
2026-01-03 00:54:24.848762 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-01-03 00:54:24.848766 | orchestrator | Saturday 03 January 2026 00:54:13 +0000 (0:00:03.831) 0:05:56.867 ******
2026-01-03 00:54:24.848770 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:54:24.848773 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:54:24.848816 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:54:24.848823 | orchestrator |
2026-01-03 00:54:24.848830 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-01-03 00:54:24.848836 | orchestrator | Saturday 03 January 2026 00:54:13 +0000 (0:00:00.359) 0:05:57.226 ******
2026-01-03 00:54:24.848842 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:54:24.848852 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:54:24.848856 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:54:24.848860 | orchestrator |
2026-01-03 00:54:24.848864 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-01-03 00:54:24.848867 | orchestrator | Saturday 03 January 2026 00:54:13 +0000 (0:00:00.377) 0:05:57.603 ******
2026-01-03 00:54:24.848871 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:54:24.848875 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:54:24.848879 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:54:24.848883 | orchestrator |
2026-01-03 00:54:24.848886 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-01-03 00:54:24.848890 | orchestrator | Saturday 03 January 2026 00:54:14 +0000 (0:00:00.660) 0:05:58.264 ******
2026-01-03 00:54:24.848894 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:54:24.848898 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:54:24.848904 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:54:24.848910 | orchestrator |
2026-01-03 00:54:24.848915 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-01-03 00:54:24.848921 | orchestrator | Saturday 03 January 2026 00:54:14 +0000 (0:00:00.357) 0:05:58.621 ******
2026-01-03 00:54:24.848927 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:54:24.848932 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:54:24.848946 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:54:24.848952 | orchestrator |
2026-01-03 00:54:24.848958 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-01-03 00:54:24.848964 | orchestrator | Saturday 03 January 2026 00:54:15 +0000 (0:00:00.354) 0:05:58.975 ******
2026-01-03 00:54:24.848971 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:54:24.848975 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:54:24.848979 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:54:24.848983 | orchestrator |
2026-01-03 00:54:24.848987 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-01-03 00:54:24.848990 | orchestrator | Saturday 03 January 2026 00:54:15 +0000 (0:00:00.369) 0:05:59.344 ******
2026-01-03 00:54:24.848994 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:54:24.848998 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:54:24.849002 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:54:24.849005 | orchestrator |
2026-01-03 00:54:24.849009 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-01-03 00:54:24.849013 | orchestrator | Saturday 03 January 2026 00:54:20 +0000 (0:00:05.085) 0:06:04.430 ******
2026-01-03 00:54:24.849016 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:54:24.849020 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:54:24.849024 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:54:24.849027 | orchestrator |
2026-01-03 00:54:24.849031 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:54:24.849035 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-01-03 00:54:24.849040 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-01-03 00:54:24.849044 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-01-03 00:54:24.849048 | orchestrator |
2026-01-03 00:54:24.849052 | orchestrator |
2026-01-03 00:54:24.849055 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:54:24.849059 | orchestrator | Saturday 03 January 2026 00:54:21 +0000 (0:00:00.850) 0:06:05.281 ******
2026-01-03 00:54:24.849063 | orchestrator | ===============================================================================
2026-01-03 00:54:24.849067 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.14s
2026-01-03 00:54:24.849070 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.47s
2026-01-03 00:54:24.849074 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.00s
2026-01-03 00:54:24.849078 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.60s
2026-01-03 00:54:24.849081 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.14s
2026-01-03 00:54:24.849085 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 5.09s
2026-01-03 00:54:24.849089 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.08s
2026-01-03 00:54:24.849092 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.80s
2026-01-03 00:54:24.849096 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.62s
2026-01-03 00:54:24.849100 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.23s
2026-01-03 00:54:24.849103 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.21s
2026-01-03 00:54:24.849107 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.13s
2026-01-03 00:54:24.849111 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.96s
2026-01-03 00:54:24.849114 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.85s
2026-01-03 00:54:24.849121 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 3.83s
2026-01-03 00:54:24.849129 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.83s
2026-01-03 00:54:24.849133 | orchestrator | haproxy-config : Configuring firewall for mariadb ----------------------- 3.74s
2026-01-03 00:54:24.849137 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 3.73s
2026-01-03 00:54:24.849140 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 3.67s
2026-01-03 00:54:24.849144 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 3.58s
2026-01-03 00:54:24.849150 | orchestrator | 2026-01-03 00:54:24 | INFO  | Task bcf712a4-6024-402f-aec1-7365fc3d186a is in state STARTED
2026-01-03 00:54:24.849154 | orchestrator | 2026-01-03 00:54:24 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED
2026-01-03 00:54:24.849158 | orchestrator | 2026-01-03 00:54:24 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state STARTED
2026-01-03 00:54:24.849162 | orchestrator | 2026-01-03 00:54:24 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:55:56.263272 | orchestrator | 2026-01-03 00:55:56 | INFO  | Task bcf712a4-6024-402f-aec1-7365fc3d186a is in state STARTED
2026-01-03 00:55:56.264584 | orchestrator | 2026-01-03 00:55:56 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED
2026-01-03 00:55:56.266671 | orchestrator | 2026-01-03 00:55:56 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state STARTED
2026-01-03 00:55:56.266711 | orchestrator | 2026-01-03 00:55:56 | INFO  |
Wait 1 second(s) until the next check 2026-01-03 00:55:59.303550 | orchestrator | 2026-01-03 00:55:59 | INFO  | Task bcf712a4-6024-402f-aec1-7365fc3d186a is in state STARTED 2026-01-03 00:55:59.303810 | orchestrator | 2026-01-03 00:55:59 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:55:59.305015 | orchestrator | 2026-01-03 00:55:59 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state STARTED 2026-01-03 00:55:59.305044 | orchestrator | 2026-01-03 00:55:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:56:02.349798 | orchestrator | 2026-01-03 00:56:02 | INFO  | Task bcf712a4-6024-402f-aec1-7365fc3d186a is in state STARTED 2026-01-03 00:56:02.351954 | orchestrator | 2026-01-03 00:56:02 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:56:02.354893 | orchestrator | 2026-01-03 00:56:02 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state STARTED 2026-01-03 00:56:02.355015 | orchestrator | 2026-01-03 00:56:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:56:05.404345 | orchestrator | 2026-01-03 00:56:05 | INFO  | Task bcf712a4-6024-402f-aec1-7365fc3d186a is in state STARTED 2026-01-03 00:56:05.406385 | orchestrator | 2026-01-03 00:56:05 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:56:05.408337 | orchestrator | 2026-01-03 00:56:05 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state STARTED 2026-01-03 00:56:05.408381 | orchestrator | 2026-01-03 00:56:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:56:08.453608 | orchestrator | 2026-01-03 00:56:08 | INFO  | Task bcf712a4-6024-402f-aec1-7365fc3d186a is in state STARTED 2026-01-03 00:56:08.455489 | orchestrator | 2026-01-03 00:56:08 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:56:08.456935 | orchestrator | 2026-01-03 00:56:08 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state 
STARTED 2026-01-03 00:56:08.457062 | orchestrator | 2026-01-03 00:56:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:56:11.502765 | orchestrator | 2026-01-03 00:56:11 | INFO  | Task bcf712a4-6024-402f-aec1-7365fc3d186a is in state STARTED 2026-01-03 00:56:11.504796 | orchestrator | 2026-01-03 00:56:11 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:56:11.507104 | orchestrator | 2026-01-03 00:56:11 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state STARTED 2026-01-03 00:56:11.507166 | orchestrator | 2026-01-03 00:56:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:56:14.549793 | orchestrator | 2026-01-03 00:56:14 | INFO  | Task bcf712a4-6024-402f-aec1-7365fc3d186a is in state STARTED 2026-01-03 00:56:14.552239 | orchestrator | 2026-01-03 00:56:14 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:56:14.555082 | orchestrator | 2026-01-03 00:56:14 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state STARTED 2026-01-03 00:56:14.555148 | orchestrator | 2026-01-03 00:56:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:56:17.594679 | orchestrator | 2026-01-03 00:56:17 | INFO  | Task bcf712a4-6024-402f-aec1-7365fc3d186a is in state STARTED 2026-01-03 00:56:17.596134 | orchestrator | 2026-01-03 00:56:17 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:56:17.597749 | orchestrator | 2026-01-03 00:56:17 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state STARTED 2026-01-03 00:56:17.597940 | orchestrator | 2026-01-03 00:56:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:56:20.639134 | orchestrator | 2026-01-03 00:56:20 | INFO  | Task bcf712a4-6024-402f-aec1-7365fc3d186a is in state STARTED 2026-01-03 00:56:20.640517 | orchestrator | 2026-01-03 00:56:20 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:56:20.642233 | orchestrator | 
2026-01-03 00:56:20 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state STARTED 2026-01-03 00:56:20.642555 | orchestrator | 2026-01-03 00:56:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:56:23.693120 | orchestrator | 2026-01-03 00:56:23 | INFO  | Task bcf712a4-6024-402f-aec1-7365fc3d186a is in state STARTED 2026-01-03 00:56:23.695225 | orchestrator | 2026-01-03 00:56:23 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:56:23.696567 | orchestrator | 2026-01-03 00:56:23 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state STARTED 2026-01-03 00:56:23.696954 | orchestrator | 2026-01-03 00:56:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:56:26.744558 | orchestrator | 2026-01-03 00:56:26 | INFO  | Task bcf712a4-6024-402f-aec1-7365fc3d186a is in state STARTED 2026-01-03 00:56:26.745756 | orchestrator | 2026-01-03 00:56:26 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:56:26.747149 | orchestrator | 2026-01-03 00:56:26 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state STARTED 2026-01-03 00:56:26.747179 | orchestrator | 2026-01-03 00:56:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:56:29.789510 | orchestrator | 2026-01-03 00:56:29 | INFO  | Task bcf712a4-6024-402f-aec1-7365fc3d186a is in state STARTED 2026-01-03 00:56:29.792888 | orchestrator | 2026-01-03 00:56:29 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:56:29.794670 | orchestrator | 2026-01-03 00:56:29 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state STARTED 2026-01-03 00:56:29.794738 | orchestrator | 2026-01-03 00:56:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:56:32.838357 | orchestrator | 2026-01-03 00:56:32 | INFO  | Task bcf712a4-6024-402f-aec1-7365fc3d186a is in state STARTED 2026-01-03 00:56:32.839655 | orchestrator | 2026-01-03 00:56:32 | INFO  | Task 
72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:56:32.841182 | orchestrator | 2026-01-03 00:56:32 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state STARTED 2026-01-03 00:56:32.841228 | orchestrator | 2026-01-03 00:56:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:56:35.893651 | orchestrator | 2026-01-03 00:56:35 | INFO  | Task bcf712a4-6024-402f-aec1-7365fc3d186a is in state STARTED 2026-01-03 00:56:35.895589 | orchestrator | 2026-01-03 00:56:35 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:56:35.898260 | orchestrator | 2026-01-03 00:56:35 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state STARTED 2026-01-03 00:56:35.898559 | orchestrator | 2026-01-03 00:56:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:56:38.943972 | orchestrator | 2026-01-03 00:56:38 | INFO  | Task bcf712a4-6024-402f-aec1-7365fc3d186a is in state STARTED 2026-01-03 00:56:38.946427 | orchestrator | 2026-01-03 00:56:38 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:56:38.948589 | orchestrator | 2026-01-03 00:56:38 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state STARTED 2026-01-03 00:56:38.948696 | orchestrator | 2026-01-03 00:56:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:56:42.002509 | orchestrator | 2026-01-03 00:56:42 | INFO  | Task bcf712a4-6024-402f-aec1-7365fc3d186a is in state STARTED 2026-01-03 00:56:42.004730 | orchestrator | 2026-01-03 00:56:42 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:56:42.008093 | orchestrator | 2026-01-03 00:56:42 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state STARTED 2026-01-03 00:56:42.008146 | orchestrator | 2026-01-03 00:56:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:56:45.059680 | orchestrator | 2026-01-03 00:56:45 | INFO  | Task bcf712a4-6024-402f-aec1-7365fc3d186a is in state 
STARTED 2026-01-03 00:56:45.062043 | orchestrator | 2026-01-03 00:56:45 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state STARTED 2026-01-03 00:56:45.064254 | orchestrator | 2026-01-03 00:56:45 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state STARTED 2026-01-03 00:56:45.064294 | orchestrator | 2026-01-03 00:56:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:56:48.122973 | orchestrator | 2026-01-03 00:56:48 | INFO  | Task bcf712a4-6024-402f-aec1-7365fc3d186a is in state STARTED 2026-01-03 00:56:48.124195 | orchestrator | 2026-01-03 00:56:48 | INFO  | Task 99d18e6a-b552-46e6-90bf-e2a9eb821ee1 is in state STARTED 2026-01-03 00:56:48.131009 | orchestrator | 2026-01-03 00:56:48 | INFO  | Task 72bd3a58-88e6-438e-b884-2d97ee08d454 is in state SUCCESS 2026-01-03 00:56:48.132450 | orchestrator | 2026-01-03 00:56:48.132473 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-03 00:56:48.132477 | orchestrator | 2.16.14 2026-01-03 00:56:48.132481 | orchestrator | 2026-01-03 00:56:48.132484 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-01-03 00:56:48.132488 | orchestrator | 2026-01-03 00:56:48.132491 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-03 00:56:48.132495 | orchestrator | Saturday 03 January 2026 00:45:34 +0000 (0:00:00.649) 0:00:00.649 ****** 2026-01-03 00:56:48.132498 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:56:48.132512 | orchestrator | 2026-01-03 00:56:48.132516 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-03 00:56:48.132519 | orchestrator | Saturday 03 January 2026 00:45:35 +0000 (0:00:01.191) 0:00:01.840 ****** 2026-01-03 00:56:48.132522 | orchestrator | ok: 
[testbed-node-3] 2026-01-03 00:56:48.132526 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.132529 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.132532 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.132535 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.132538 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.132542 | orchestrator | 2026-01-03 00:56:48.132545 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-03 00:56:48.132548 | orchestrator | Saturday 03 January 2026 00:45:37 +0000 (0:00:02.009) 0:00:03.850 ****** 2026-01-03 00:56:48.132551 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.132555 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.132558 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.132561 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.132564 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.132567 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.132570 | orchestrator | 2026-01-03 00:56:48.132574 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-03 00:56:48.132577 | orchestrator | Saturday 03 January 2026 00:45:38 +0000 (0:00:00.825) 0:00:04.676 ****** 2026-01-03 00:56:48.132580 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.132583 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.132586 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.132589 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.132593 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.132596 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.132599 | orchestrator | 2026-01-03 00:56:48.132602 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-03 00:56:48.132605 | orchestrator | Saturday 03 January 2026 00:45:39 +0000 (0:00:00.817) 0:00:05.493 ****** 2026-01-03 
00:56:48.132608 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.132611 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.132614 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.132617 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.132620 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.132623 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.132626 | orchestrator | 2026-01-03 00:56:48.132630 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-03 00:56:48.132633 | orchestrator | Saturday 03 January 2026 00:45:40 +0000 (0:00:00.695) 0:00:06.189 ****** 2026-01-03 00:56:48.132636 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.132639 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.132642 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.132645 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.132648 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.132651 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.132654 | orchestrator | 2026-01-03 00:56:48.132663 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-03 00:56:48.132666 | orchestrator | Saturday 03 January 2026 00:45:40 +0000 (0:00:00.491) 0:00:06.680 ****** 2026-01-03 00:56:48.132669 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.132672 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.132675 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.132678 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.132681 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.132684 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.132687 | orchestrator | 2026-01-03 00:56:48.132690 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-03 00:56:48.132693 | orchestrator | Saturday 03 January 2026 00:45:41 +0000 
(0:00:00.857) 0:00:07.537 ****** 2026-01-03 00:56:48.132696 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.132700 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.132706 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.132709 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.132712 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.132715 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.132718 | orchestrator | 2026-01-03 00:56:48.132721 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-03 00:56:48.132724 | orchestrator | Saturday 03 January 2026 00:45:42 +0000 (0:00:00.976) 0:00:08.514 ****** 2026-01-03 00:56:48.132727 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.132730 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.132733 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.132736 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.132739 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.132742 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.132745 | orchestrator | 2026-01-03 00:56:48.132749 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-03 00:56:48.132752 | orchestrator | Saturday 03 January 2026 00:45:43 +0000 (0:00:00.924) 0:00:09.438 ****** 2026-01-03 00:56:48.132755 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-03 00:56:48.132758 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-03 00:56:48.132761 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-03 00:56:48.132764 | orchestrator | 2026-01-03 00:56:48.132767 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-03 00:56:48.132805 | 
orchestrator | Saturday 03 January 2026 00:45:44 +0000 (0:00:00.726) 0:00:10.165 ****** 2026-01-03 00:56:48.132810 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.132813 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.132862 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.132872 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.132876 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.132898 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.132902 | orchestrator | 2026-01-03 00:56:48.132905 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-03 00:56:48.132908 | orchestrator | Saturday 03 January 2026 00:45:45 +0000 (0:00:00.891) 0:00:11.056 ****** 2026-01-03 00:56:48.132911 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-03 00:56:48.132935 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-03 00:56:48.132939 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-03 00:56:48.133050 | orchestrator | 2026-01-03 00:56:48.133055 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-03 00:56:48.133058 | orchestrator | Saturday 03 January 2026 00:45:48 +0000 (0:00:03.370) 0:00:14.427 ****** 2026-01-03 00:56:48.133061 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-03 00:56:48.133064 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-03 00:56:48.133067 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-03 00:56:48.133070 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.133073 | orchestrator | 2026-01-03 00:56:48.133076 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-03 00:56:48.133079 | orchestrator | 
Saturday 03 January 2026 00:45:48 +0000 (0:00:00.468) 0:00:14.895 ****** 2026-01-03 00:56:48.133083 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.133088 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.133094 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.133097 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.133100 | orchestrator | 2026-01-03 00:56:48.133103 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-03 00:56:48.133106 | orchestrator | Saturday 03 January 2026 00:45:49 +0000 (0:00:00.835) 0:00:15.731 ****** 2026-01-03 00:56:48.133113 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.133117 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.133120 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.133123 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.133126 | orchestrator | 2026-01-03 00:56:48.133129 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-03 00:56:48.133133 | orchestrator | Saturday 03 January 2026 00:45:50 +0000 (0:00:00.539) 0:00:16.270 ****** 2026-01-03 00:56:48.133139 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-03 00:45:45.857353', 'end': '2026-01-03 00:45:46.171040', 'delta': '0:00:00.313687', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.133145 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-03 
00:45:47.099271', 'end': '2026-01-03 00:45:47.365951', 'delta': '0:00:00.266680', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.133148 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-03 00:45:48.102197', 'end': '2026-01-03 00:45:48.400617', 'delta': '0:00:00.298420', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.133153 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.133157 | orchestrator | 2026-01-03 00:56:48.133160 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-03 00:56:48.133163 | orchestrator | Saturday 03 January 2026 00:45:50 +0000 (0:00:00.279) 0:00:16.550 ****** 2026-01-03 00:56:48.133166 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.133169 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.133172 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.133175 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.133178 | orchestrator | ok: [testbed-node-1] 
2026-01-03 00:56:48.133181 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.133184 | orchestrator | 2026-01-03 00:56:48.133188 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-03 00:56:48.133191 | orchestrator | Saturday 03 January 2026 00:45:52 +0000 (0:00:01.433) 0:00:17.983 ****** 2026-01-03 00:56:48.133194 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-03 00:56:48.133197 | orchestrator | 2026-01-03 00:56:48.133200 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-03 00:56:48.133205 | orchestrator | Saturday 03 January 2026 00:45:52 +0000 (0:00:00.653) 0:00:18.636 ****** 2026-01-03 00:56:48.133208 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.133211 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.133214 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.133217 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.133220 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.133223 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.133226 | orchestrator | 2026-01-03 00:56:48.133229 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-03 00:56:48.133232 | orchestrator | Saturday 03 January 2026 00:45:54 +0000 (0:00:01.608) 0:00:20.245 ****** 2026-01-03 00:56:48.133236 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.133239 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.133242 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.133245 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.133248 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.133251 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.133254 | orchestrator | 2026-01-03 00:56:48.133257 | orchestrator | TASK [ceph-facts : Set_fact fsid] 
********************************************** 2026-01-03 00:56:48.133260 | orchestrator | Saturday 03 January 2026 00:45:56 +0000 (0:00:01.785) 0:00:22.031 ****** 2026-01-03 00:56:48.133263 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.133266 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.133269 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.133272 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.133275 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.133278 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.133281 | orchestrator | 2026-01-03 00:56:48.133285 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-03 00:56:48.133288 | orchestrator | Saturday 03 January 2026 00:45:57 +0000 (0:00:01.269) 0:00:23.301 ****** 2026-01-03 00:56:48.133291 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.133294 | orchestrator | 2026-01-03 00:56:48.133297 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-03 00:56:48.133300 | orchestrator | Saturday 03 January 2026 00:45:57 +0000 (0:00:00.172) 0:00:23.474 ****** 2026-01-03 00:56:48.133303 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.133308 | orchestrator | 2026-01-03 00:56:48.133311 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-03 00:56:48.133314 | orchestrator | Saturday 03 January 2026 00:45:57 +0000 (0:00:00.342) 0:00:23.816 ****** 2026-01-03 00:56:48.133317 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.133321 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.133324 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.133329 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.133332 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.133335 | orchestrator | skipping: 
[testbed-node-2] 2026-01-03 00:56:48.133338 | orchestrator | 2026-01-03 00:56:48.133341 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-03 00:56:48.133345 | orchestrator | Saturday 03 January 2026 00:45:58 +0000 (0:00:00.774) 0:00:24.591 ****** 2026-01-03 00:56:48.133348 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.133351 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.133354 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.133357 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.133360 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.133363 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.133366 | orchestrator | 2026-01-03 00:56:48.133369 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-03 00:56:48.133372 | orchestrator | Saturday 03 January 2026 00:45:59 +0000 (0:00:00.809) 0:00:25.400 ****** 2026-01-03 00:56:48.133375 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.133378 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.133381 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.133384 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.133387 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.133390 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.133393 | orchestrator | 2026-01-03 00:56:48.133396 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-01-03 00:56:48.133399 | orchestrator | Saturday 03 January 2026 00:46:00 +0000 (0:00:00.765) 0:00:26.166 ****** 2026-01-03 00:56:48.133402 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.133405 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.133430 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.133434 | orchestrator | skipping: 
[testbed-node-0] 2026-01-03 00:56:48.133437 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.133440 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.133443 | orchestrator | 2026-01-03 00:56:48.133446 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-03 00:56:48.133449 | orchestrator | Saturday 03 January 2026 00:46:01 +0000 (0:00:01.150) 0:00:27.316 ****** 2026-01-03 00:56:48.133452 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.133455 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.133458 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.133461 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.133465 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.133468 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.133471 | orchestrator | 2026-01-03 00:56:48.133474 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-03 00:56:48.133571 | orchestrator | Saturday 03 January 2026 00:46:01 +0000 (0:00:00.530) 0:00:27.846 ****** 2026-01-03 00:56:48.133575 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.133578 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.133581 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.133584 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.133587 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.133590 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.133593 | orchestrator | 2026-01-03 00:56:48.133596 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-03 00:56:48.133600 | orchestrator | Saturday 03 January 2026 00:46:02 +0000 (0:00:00.920) 0:00:28.766 ****** 2026-01-03 00:56:48.133605 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.133608 | orchestrator | skipping: 
[testbed-node-4] 2026-01-03 00:56:48.133612 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.133617 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.133620 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.133623 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.133645 | orchestrator | 2026-01-03 00:56:48.133689 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-03 00:56:48.133692 | orchestrator | Saturday 03 January 2026 00:46:03 +0000 (0:00:00.670) 0:00:29.436 ****** 2026-01-03 00:56:48.133696 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c38584cd--f033--5ed2--9691--83456ad614b7-osd--block--c38584cd--f033--5ed2--9691--83456ad614b7', 'dm-uuid-LVM-E0SLy0xxpfD6sTvVCIDPbqNc4GMCOCUptP94SpiYGE5vofYYlylLirpwuCLL2IIP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.133700 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d5e4cbc2--7f45--5eff--bf2d--d06fd7ec5898-osd--block--d5e4cbc2--7f45--5eff--bf2d--d06fd7ec5898', 'dm-uuid-LVM-V8Qk00zkomK0NL3Q4cqrm8tfvImB27p4tpR6HKkJ5iLRmvpxnNpbZjzV0CtdmwQs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.133780 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.133785 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.133789 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.133792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.133796 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.133804 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.133812 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.133829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.133850 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba', 'scsi-SQEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part1', 'scsi-SQEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part14', 'scsi-SQEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part15', 'scsi-SQEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part16', 'scsi-SQEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:56:48.133857 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c38584cd--f033--5ed2--9691--83456ad614b7-osd--block--c38584cd--f033--5ed2--9691--83456ad614b7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BRWCS0-dcrg-y2sh-Oroo-Kq1m-UIyS-kyZoBl', 'scsi-0QEMU_QEMU_HARDDISK_2050ce1a-3081-4edd-a04d-3576bece8338', 'scsi-SQEMU_QEMU_HARDDISK_2050ce1a-3081-4edd-a04d-3576bece8338'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:56:48.133870 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d5e4cbc2--7f45--5eff--bf2d--d06fd7ec5898-osd--block--d5e4cbc2--7f45--5eff--bf2d--d06fd7ec5898'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DfzhTZ-p50D-CgcH-gVNP-0T9N-kPcG-1dOPE9', 'scsi-0QEMU_QEMU_HARDDISK_deb598c2-f543-4f9b-b077-315ce19fa743', 'scsi-SQEMU_QEMU_HARDDISK_deb598c2-f543-4f9b-b077-315ce19fa743'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:56:48.133875 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--85e74b82--cd6e--500e--9461--b867f1cfbb6a-osd--block--85e74b82--cd6e--500e--9461--b867f1cfbb6a', 'dm-uuid-LVM-NGHa1wUn8V350RlbQkJyBkV1rAqUU52v6nrYcdeahLIqO19Dbf8R3enPFwK8NgU9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.133881 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f493d531-f14a-40ab-852d-4e184520cb25', 'scsi-SQEMU_QEMU_HARDDISK_f493d531-f14a-40ab-852d-4e184520cb25'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:56:48.133897 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:56:48.133903 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1ae59360--fa3d--59bd--b3b8--51590acdfd6e-osd--block--1ae59360--fa3d--59bd--b3b8--51590acdfd6e', 'dm-uuid-LVM-x0tcY9oMSmUzULFEhVgjmU1edzjHsa9qH2UeuFA78MnOtpX4Ju5rXgC9oBuuBBHY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.133935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.133945 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.133948 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.133952 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.133957 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.133961 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.133964 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.133967 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.133980 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.133986 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0', 'scsi-SQEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part1', 'scsi-SQEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part14', 'scsi-SQEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part15', 'scsi-SQEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part16', 'scsi-SQEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:56:48.133991 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--85e74b82--cd6e--500e--9461--b867f1cfbb6a-osd--block--85e74b82--cd6e--500e--9461--b867f1cfbb6a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-crt6dL-CeDZ-3hms-lPDz-CD85-4F34-gRqb46', 'scsi-0QEMU_QEMU_HARDDISK_c0ea832c-91ed-4e4f-b69a-de1dd6828a04', 'scsi-SQEMU_QEMU_HARDDISK_c0ea832c-91ed-4e4f-b69a-de1dd6828a04'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:56:48.133995 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1ae59360--fa3d--59bd--b3b8--51590acdfd6e-osd--block--1ae59360--fa3d--59bd--b3b8--51590acdfd6e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EoW2yZ-4Rbk-tIRq-J6CD-zI2A-8Kl3-8ohoyA', 'scsi-0QEMU_QEMU_HARDDISK_92ee9088-f522-4da5-b9de-cc8e73fea3b4', 'scsi-SQEMU_QEMU_HARDDISK_92ee9088-f522-4da5-b9de-cc8e73fea3b4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:56:48.134006 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64f2fd4f-89e8-4ffa-8baf-bdc6a23cfca5', 'scsi-SQEMU_QEMU_HARDDISK_64f2fd4f-89e8-4ffa-8baf-bdc6a23cfca5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:56:48.134010 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:56:48.134036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134040 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c0772612--0fc2--543a--b7cc--c9fc1cdd665f-osd--block--c0772612--0fc2--543a--b7cc--c9fc1cdd665f', 'dm-uuid-LVM-L3YoutWQquMEZSSYtKpK6iMm17YuKmdhxZDFI0w81VqyoVae0ofnrDxH7gZJ3y2m'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134115 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--45670551--be8c--5463--bb13--3841732d7282-osd--block--45670551--be8c--5463--bb13--3841732d7282', 'dm-uuid-LVM-XigZQOTdcftuIUPTt9fZjIvpXyb1vJf2OL88b8i2lUQSeWGs78yAg3dsKPUiWBn1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134135 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134145 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134151 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134166 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': 
[], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134283 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2bf45dd0-3b3b-4bfc-8f32-ca0729857a93', 'scsi-SQEMU_QEMU_HARDDISK_2bf45dd0-3b3b-4bfc-8f32-ca0729857a93'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2bf45dd0-3b3b-4bfc-8f32-ca0729857a93-part1', 'scsi-SQEMU_QEMU_HARDDISK_2bf45dd0-3b3b-4bfc-8f32-ca0729857a93-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2bf45dd0-3b3b-4bfc-8f32-ca0729857a93-part14', 'scsi-SQEMU_QEMU_HARDDISK_2bf45dd0-3b3b-4bfc-8f32-ca0729857a93-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2bf45dd0-3b3b-4bfc-8f32-ca0729857a93-part15', 'scsi-SQEMU_QEMU_HARDDISK_2bf45dd0-3b3b-4bfc-8f32-ca0729857a93-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2bf45dd0-3b3b-4bfc-8f32-ca0729857a93-part16', 'scsi-SQEMU_QEMU_HARDDISK_2bf45dd0-3b3b-4bfc-8f32-ca0729857a93-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:56:48.134295 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:56:48.134298 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134310 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b', 'scsi-SQEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part1', 'scsi-SQEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part14', 'scsi-SQEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part15', 'scsi-SQEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part16', 'scsi-SQEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:56:48.134318 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c0772612--0fc2--543a--b7cc--c9fc1cdd665f-osd--block--c0772612--0fc2--543a--b7cc--c9fc1cdd665f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-L3YK4j-1nNb-nWx2-VZ0W-SrCJ-Bt6D-C16i1e', 'scsi-0QEMU_QEMU_HARDDISK_18deaf14-926e-4cd7-8e92-2fabf4ecc6e0', 'scsi-SQEMU_QEMU_HARDDISK_18deaf14-926e-4cd7-8e92-2fabf4ecc6e0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:56:48.134321 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.134324 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--45670551--be8c--5463--bb13--3841732d7282-osd--block--45670551--be8c--5463--bb13--3841732d7282'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lBuccY-K5SU-jpvV-AeFo-xoB9-n0WZ-HqUcnJ', 'scsi-0QEMU_QEMU_HARDDISK_b0c096f4-c40f-4db0-bd86-40b4e9f72c6c', 'scsi-SQEMU_QEMU_HARDDISK_b0c096f4-c40f-4db0-bd86-40b4e9f72c6c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:56:48.134328 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_75764784-fbeb-447b-add5-f3485e6783bd', 'scsi-SQEMU_QEMU_HARDDISK_75764784-fbeb-447b-add5-f3485e6783bd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:56:48.134344 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:56:48.134351 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.134354 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.134375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134504 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_480c0bd9-4479-4e9b-bee3-e1a1c18f46c7', 'scsi-SQEMU_QEMU_HARDDISK_480c0bd9-4479-4e9b-bee3-e1a1c18f46c7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_480c0bd9-4479-4e9b-bee3-e1a1c18f46c7-part1', 'scsi-SQEMU_QEMU_HARDDISK_480c0bd9-4479-4e9b-bee3-e1a1c18f46c7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_480c0bd9-4479-4e9b-bee3-e1a1c18f46c7-part14', 'scsi-SQEMU_QEMU_HARDDISK_480c0bd9-4479-4e9b-bee3-e1a1c18f46c7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_480c0bd9-4479-4e9b-bee3-e1a1c18f46c7-part15', 'scsi-SQEMU_QEMU_HARDDISK_480c0bd9-4479-4e9b-bee3-e1a1c18f46c7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_480c0bd9-4479-4e9b-bee3-e1a1c18f46c7-part16', 'scsi-SQEMU_QEMU_HARDDISK_480c0bd9-4479-4e9b-bee3-e1a1c18f46c7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:56:48.134538 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:56:48.134541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134544 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.134548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-01-03 00:56:48.134564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:56:48.134580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red 
Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bbcd537-85f1-4819-90f6-f7f08a06c207', 'scsi-SQEMU_QEMU_HARDDISK_7bbcd537-85f1-4819-90f6-f7f08a06c207'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bbcd537-85f1-4819-90f6-f7f08a06c207-part1', 'scsi-SQEMU_QEMU_HARDDISK_7bbcd537-85f1-4819-90f6-f7f08a06c207-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bbcd537-85f1-4819-90f6-f7f08a06c207-part14', 'scsi-SQEMU_QEMU_HARDDISK_7bbcd537-85f1-4819-90f6-f7f08a06c207-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bbcd537-85f1-4819-90f6-f7f08a06c207-part15', 'scsi-SQEMU_QEMU_HARDDISK_7bbcd537-85f1-4819-90f6-f7f08a06c207-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bbcd537-85f1-4819-90f6-f7f08a06c207-part16', 'scsi-SQEMU_QEMU_HARDDISK_7bbcd537-85f1-4819-90f6-f7f08a06c207-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 
00:56:48.134593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:56:48.134596 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.134600 | orchestrator | 2026-01-03 00:56:48.134603 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-03 00:56:48.134606 | orchestrator | Saturday 03 January 2026 00:46:04 +0000 (0:00:00.991) 0:00:30.428 ****** 2026-01-03 00:56:48.134610 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--85e74b82--cd6e--500e--9461--b867f1cfbb6a-osd--block--85e74b82--cd6e--500e--9461--b867f1cfbb6a', 'dm-uuid-LVM-NGHa1wUn8V350RlbQkJyBkV1rAqUU52v6nrYcdeahLIqO19Dbf8R3enPFwK8NgU9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.134614 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1ae59360--fa3d--59bd--b3b8--51590acdfd6e-osd--block--1ae59360--fa3d--59bd--b3b8--51590acdfd6e', 'dm-uuid-LVM-x0tcY9oMSmUzULFEhVgjmU1edzjHsa9qH2UeuFA78MnOtpX4Ju5rXgC9oBuuBBHY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.134619 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.134622 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.134625 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.134659 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.134680 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c38584cd--f033--5ed2--9691--83456ad614b7-osd--block--c38584cd--f033--5ed2--9691--83456ad614b7', 'dm-uuid-LVM-E0SLy0xxpfD6sTvVCIDPbqNc4GMCOCUptP94SpiYGE5vofYYlylLirpwuCLL2IIP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 
'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.134701 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.134708 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d5e4cbc2--7f45--5eff--bf2d--d06fd7ec5898-osd--block--d5e4cbc2--7f45--5eff--bf2d--d06fd7ec5898', 'dm-uuid-LVM-V8Qk00zkomK0NL3Q4cqrm8tfvImB27p4tpR6HKkJ5iLRmvpxnNpbZjzV0CtdmwQs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.134711 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.134715 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.134801 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.134807 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.134878 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.134885 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.134891 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': 
None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.134909 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.134928 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c0772612--0fc2--543a--b7cc--c9fc1cdd665f-osd--block--c0772612--0fc2--543a--b7cc--c9fc1cdd665f', 'dm-uuid-LVM-L3YoutWQquMEZSSYtKpK6iMm17YuKmdhxZDFI0w81VqyoVae0ofnrDxH7gZJ3y2m'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.134932 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.134935 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.134938 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.134944 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.134947 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--45670551--be8c--5463--bb13--3841732d7282-osd--block--45670551--be8c--5463--bb13--3841732d7282', 'dm-uuid-LVM-XigZQOTdcftuIUPTt9fZjIvpXyb1vJf2OL88b8i2lUQSeWGs78yAg3dsKPUiWBn1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.134953 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.134964 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.134968 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.134973 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0', 'scsi-SQEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part1', 'scsi-SQEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part14', 'scsi-SQEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part15', 'scsi-SQEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part16', 'scsi-SQEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-03 00:56:48.134979 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.134990 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.134994 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.134997 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135002 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba', 'scsi-SQEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part1', 'scsi-SQEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part14', 'scsi-SQEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part15', 'scsi-SQEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part16', 'scsi-SQEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135033 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c38584cd--f033--5ed2--9691--83456ad614b7-osd--block--c38584cd--f033--5ed2--9691--83456ad614b7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BRWCS0-dcrg-y2sh-Oroo-Kq1m-UIyS-kyZoBl', 'scsi-0QEMU_QEMU_HARDDISK_2050ce1a-3081-4edd-a04d-3576bece8338', 'scsi-SQEMU_QEMU_HARDDISK_2050ce1a-3081-4edd-a04d-3576bece8338'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135039 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d5e4cbc2--7f45--5eff--bf2d--d06fd7ec5898-osd--block--d5e4cbc2--7f45--5eff--bf2d--d06fd7ec5898'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DfzhTZ-p50D-CgcH-gVNP-0T9N-kPcG-1dOPE9', 'scsi-0QEMU_QEMU_HARDDISK_deb598c2-f543-4f9b-b077-315ce19fa743', 'scsi-SQEMU_QEMU_HARDDISK_deb598c2-f543-4f9b-b077-315ce19fa743'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135060 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135064 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--85e74b82--cd6e--500e--9461--b867f1cfbb6a-osd--block--85e74b82--cd6e--500e--9461--b867f1cfbb6a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-crt6dL-CeDZ-3hms-lPDz-CD85-4F34-gRqb46', 'scsi-0QEMU_QEMU_HARDDISK_c0ea832c-91ed-4e4f-b69a-de1dd6828a04', 'scsi-SQEMU_QEMU_HARDDISK_c0ea832c-91ed-4e4f-b69a-de1dd6828a04'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135077 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f493d531-f14a-40ab-852d-4e184520cb25', 'scsi-SQEMU_QEMU_HARDDISK_f493d531-f14a-40ab-852d-4e184520cb25'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135080 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135084 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135087 | orchestrator | skipping: 
[testbed-node-3] 2026-01-03 00:56:48.135090 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135102 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2bf45dd0-3b3b-4bfc-8f32-ca0729857a93', 'scsi-SQEMU_QEMU_HARDDISK_2bf45dd0-3b3b-4bfc-8f32-ca0729857a93'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2bf45dd0-3b3b-4bfc-8f32-ca0729857a93-part1', 'scsi-SQEMU_QEMU_HARDDISK_2bf45dd0-3b3b-4bfc-8f32-ca0729857a93-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2bf45dd0-3b3b-4bfc-8f32-ca0729857a93-part14', 'scsi-SQEMU_QEMU_HARDDISK_2bf45dd0-3b3b-4bfc-8f32-ca0729857a93-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2bf45dd0-3b3b-4bfc-8f32-ca0729857a93-part15', 'scsi-SQEMU_QEMU_HARDDISK_2bf45dd0-3b3b-4bfc-8f32-ca0729857a93-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2bf45dd0-3b3b-4bfc-8f32-ca0729857a93-part16', 'scsi-SQEMU_QEMU_HARDDISK_2bf45dd0-3b3b-4bfc-8f32-ca0729857a93-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-03 00:56:48.135110 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1ae59360--fa3d--59bd--b3b8--51590acdfd6e-osd--block--1ae59360--fa3d--59bd--b3b8--51590acdfd6e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EoW2yZ-4Rbk-tIRq-J6CD-zI2A-8Kl3-8ohoyA', 'scsi-0QEMU_QEMU_HARDDISK_92ee9088-f522-4da5-b9de-cc8e73fea3b4', 'scsi-SQEMU_QEMU_HARDDISK_92ee9088-f522-4da5-b9de-cc8e73fea3b4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135113 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135135 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': 
[]}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135143 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135146 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135161 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 
'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64f2fd4f-89e8-4ffa-8baf-bdc6a23cfca5', 'scsi-SQEMU_QEMU_HARDDISK_64f2fd4f-89e8-4ffa-8baf-bdc6a23cfca5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135196 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.135204 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135209 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135307 | orchestrator | 
skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135325 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135330 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135462 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135474 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135480 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.135486 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135491 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135505 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135512 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2026-01-03 00:56:48.135515 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135546 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135554 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_480c0bd9-4479-4e9b-bee3-e1a1c18f46c7', 'scsi-SQEMU_QEMU_HARDDISK_480c0bd9-4479-4e9b-bee3-e1a1c18f46c7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_480c0bd9-4479-4e9b-bee3-e1a1c18f46c7-part1', 'scsi-SQEMU_QEMU_HARDDISK_480c0bd9-4479-4e9b-bee3-e1a1c18f46c7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_480c0bd9-4479-4e9b-bee3-e1a1c18f46c7-part14', 'scsi-SQEMU_QEMU_HARDDISK_480c0bd9-4479-4e9b-bee3-e1a1c18f46c7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_480c0bd9-4479-4e9b-bee3-e1a1c18f46c7-part15', 'scsi-SQEMU_QEMU_HARDDISK_480c0bd9-4479-4e9b-bee3-e1a1c18f46c7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_480c0bd9-4479-4e9b-bee3-e1a1c18f46c7-part16', 'scsi-SQEMU_QEMU_HARDDISK_480c0bd9-4479-4e9b-bee3-e1a1c18f46c7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-03 00:56:48.135560 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135585 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135589 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 
KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135595 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b', 'scsi-SQEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part1', 'scsi-SQEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part14', 'scsi-SQEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part15', 'scsi-SQEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part16', 'scsi-SQEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135602 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135606 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.135629 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135633 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135638 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c0772612--0fc2--543a--b7cc--c9fc1cdd665f-osd--block--c0772612--0fc2--543a--b7cc--c9fc1cdd665f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-L3YK4j-1nNb-nWx2-VZ0W-SrCJ-Bt6D-C16i1e', 'scsi-0QEMU_QEMU_HARDDISK_18deaf14-926e-4cd7-8e92-2fabf4ecc6e0', 'scsi-SQEMU_QEMU_HARDDISK_18deaf14-926e-4cd7-8e92-2fabf4ecc6e0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135674 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135708 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bbcd537-85f1-4819-90f6-f7f08a06c207', 'scsi-SQEMU_QEMU_HARDDISK_7bbcd537-85f1-4819-90f6-f7f08a06c207'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bbcd537-85f1-4819-90f6-f7f08a06c207-part1', 'scsi-SQEMU_QEMU_HARDDISK_7bbcd537-85f1-4819-90f6-f7f08a06c207-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bbcd537-85f1-4819-90f6-f7f08a06c207-part14', 'scsi-SQEMU_QEMU_HARDDISK_7bbcd537-85f1-4819-90f6-f7f08a06c207-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bbcd537-85f1-4819-90f6-f7f08a06c207-part15', 'scsi-SQEMU_QEMU_HARDDISK_7bbcd537-85f1-4819-90f6-f7f08a06c207-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bbcd537-85f1-4819-90f6-f7f08a06c207-part16', 'scsi-SQEMU_QEMU_HARDDISK_7bbcd537-85f1-4819-90f6-f7f08a06c207-part16'], 'labels': ['BOOT'], 
'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135716 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135725 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.135730 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--45670551--be8c--5463--bb13--3841732d7282-osd--block--45670551--be8c--5463--bb13--3841732d7282'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lBuccY-K5SU-jpvV-AeFo-xoB9-n0WZ-HqUcnJ', 'scsi-0QEMU_QEMU_HARDDISK_b0c096f4-c40f-4db0-bd86-40b4e9f72c6c', 'scsi-SQEMU_QEMU_HARDDISK_b0c096f4-c40f-4db0-bd86-40b4e9f72c6c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135735 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_75764784-fbeb-447b-add5-f3485e6783bd', 'scsi-SQEMU_QEMU_HARDDISK_75764784-fbeb-447b-add5-f3485e6783bd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135740 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:56:48.135744 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.135749 | orchestrator | 2026-01-03 00:56:48.135784 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-03 00:56:48.135792 | orchestrator | Saturday 03 January 2026 00:46:05 +0000 (0:00:01.235) 0:00:31.664 ****** 2026-01-03 00:56:48.135797 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.135802 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.135807 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.135811 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.135816 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.135901 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.135909 | orchestrator | 2026-01-03 00:56:48.135914 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-01-03 00:56:48.135919 | orchestrator | Saturday 03 January 2026 00:46:06 +0000 (0:00:01.063) 0:00:32.727 ****** 2026-01-03 00:56:48.135925 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.135930 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.135936 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.135964 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.135970 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.135975 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.135980 | orchestrator | 2026-01-03 00:56:48.135993 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-03 00:56:48.136043 | orchestrator | Saturday 03 January 2026 00:46:07 +0000 (0:00:00.467) 0:00:33.195 ****** 2026-01-03 00:56:48.136058 | orchestrator | skipping: [testbed-node-3] 2026-01-03 
00:56:48.136064 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.136069 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.136074 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.136079 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.136084 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.136090 | orchestrator | 2026-01-03 00:56:48.136093 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-03 00:56:48.136096 | orchestrator | Saturday 03 January 2026 00:46:08 +0000 (0:00:00.730) 0:00:33.925 ****** 2026-01-03 00:56:48.136099 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.136102 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.136105 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.136109 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.136112 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.136114 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.136117 | orchestrator | 2026-01-03 00:56:48.136121 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-03 00:56:48.136124 | orchestrator | Saturday 03 January 2026 00:46:08 +0000 (0:00:00.704) 0:00:34.630 ****** 2026-01-03 00:56:48.136127 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.136130 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.136133 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.136136 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.136139 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.136142 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.136145 | orchestrator | 2026-01-03 00:56:48.136148 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-03 00:56:48.136152 | orchestrator | Saturday 03 
January 2026 00:46:09 +0000 (0:00:01.009) 0:00:35.639 ****** 2026-01-03 00:56:48.136155 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.136158 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.136161 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.136164 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.136167 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.136170 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.136173 | orchestrator | 2026-01-03 00:56:48.136176 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-03 00:56:48.136181 | orchestrator | Saturday 03 January 2026 00:46:10 +0000 (0:00:00.845) 0:00:36.485 ****** 2026-01-03 00:56:48.136184 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-01-03 00:56:48.136187 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-01-03 00:56:48.136190 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-03 00:56:48.136193 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-01-03 00:56:48.136196 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-01-03 00:56:48.136199 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-01-03 00:56:48.136202 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-03 00:56:48.136206 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-01-03 00:56:48.136209 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-01-03 00:56:48.136212 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-03 00:56:48.136215 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-01-03 00:56:48.136218 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-01-03 00:56:48.136221 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-01-03 00:56:48.136224 | orchestrator | ok: 
[testbed-node-1] => (item=testbed-node-2) 2026-01-03 00:56:48.136227 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-01-03 00:56:48.136233 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-01-03 00:56:48.136236 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-01-03 00:56:48.136239 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-01-03 00:56:48.136242 | orchestrator | 2026-01-03 00:56:48.136245 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-03 00:56:48.136248 | orchestrator | Saturday 03 January 2026 00:46:12 +0000 (0:00:01.787) 0:00:38.272 ****** 2026-01-03 00:56:48.136251 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-03 00:56:48.136254 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-03 00:56:48.136257 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-03 00:56:48.136261 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.136264 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-01-03 00:56:48.136267 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-03 00:56:48.136270 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-03 00:56:48.136273 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.136278 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-03 00:56:48.136318 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-01-03 00:56:48.136325 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-03 00:56:48.136331 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.136336 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-03 00:56:48.136340 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-03 00:56:48.136344 | 
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-03 00:56:48.136353 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.136356 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-01-03 00:56:48.136359 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-01-03 00:56:48.136362 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-01-03 00:56:48.136366 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.136369 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-01-03 00:56:48.136372 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-01-03 00:56:48.136375 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-01-03 00:56:48.136378 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.136381 | orchestrator | 2026-01-03 00:56:48.136384 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-03 00:56:48.136387 | orchestrator | Saturday 03 January 2026 00:46:13 +0000 (0:00:00.688) 0:00:38.961 ****** 2026-01-03 00:56:48.136390 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.136393 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.136397 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.136400 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:56:48.136403 | orchestrator | 2026-01-03 00:56:48.136406 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-03 00:56:48.136410 | orchestrator | Saturday 03 January 2026 00:46:13 +0000 (0:00:00.926) 0:00:39.887 ****** 2026-01-03 00:56:48.136413 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.136416 | orchestrator | skipping: 
[testbed-node-4] 2026-01-03 00:56:48.136420 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.136423 | orchestrator | 2026-01-03 00:56:48.136426 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-03 00:56:48.136430 | orchestrator | Saturday 03 January 2026 00:46:14 +0000 (0:00:00.317) 0:00:40.204 ****** 2026-01-03 00:56:48.136434 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.136437 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.136444 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.136448 | orchestrator | 2026-01-03 00:56:48.136451 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-03 00:56:48.136455 | orchestrator | Saturday 03 January 2026 00:46:14 +0000 (0:00:00.582) 0:00:40.787 ****** 2026-01-03 00:56:48.136459 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.136462 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.136466 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.136469 | orchestrator | 2026-01-03 00:56:48.136473 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-03 00:56:48.136476 | orchestrator | Saturday 03 January 2026 00:46:15 +0000 (0:00:00.747) 0:00:41.534 ****** 2026-01-03 00:56:48.136480 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.136486 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.136490 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.136493 | orchestrator | 2026-01-03 00:56:48.136497 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-03 00:56:48.136501 | orchestrator | Saturday 03 January 2026 00:46:16 +0000 (0:00:00.576) 0:00:42.110 ****** 2026-01-03 00:56:48.136505 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-03 00:56:48.136508 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-03 00:56:48.136512 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-03 00:56:48.136516 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.136519 | orchestrator | 2026-01-03 00:56:48.136523 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-03 00:56:48.136527 | orchestrator | Saturday 03 January 2026 00:46:16 +0000 (0:00:00.309) 0:00:42.420 ****** 2026-01-03 00:56:48.136530 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-03 00:56:48.136534 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-03 00:56:48.136537 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-03 00:56:48.136541 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.136544 | orchestrator | 2026-01-03 00:56:48.136548 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-03 00:56:48.136552 | orchestrator | Saturday 03 January 2026 00:46:16 +0000 (0:00:00.318) 0:00:42.739 ****** 2026-01-03 00:56:48.136556 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-03 00:56:48.136560 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-03 00:56:48.136563 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-03 00:56:48.136567 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.136571 | orchestrator | 2026-01-03 00:56:48.136574 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-03 00:56:48.136578 | orchestrator | Saturday 03 January 2026 00:46:17 +0000 (0:00:00.386) 0:00:43.125 ****** 2026-01-03 00:56:48.136582 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.136585 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.136589 | orchestrator | ok: [testbed-node-5] 
2026-01-03 00:56:48.136593 | orchestrator |
2026-01-03 00:56:48.136596 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-03 00:56:48.136599 | orchestrator | Saturday 03 January 2026 00:46:17 +0000 (0:00:00.670) 0:00:43.796 ******
2026-01-03 00:56:48.136602 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-03 00:56:48.136606 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-03 00:56:48.136621 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-01-03 00:56:48.136624 | orchestrator |
2026-01-03 00:56:48.136628 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-01-03 00:56:48.136632 | orchestrator | Saturday 03 January 2026 00:46:19 +0000 (0:00:01.435) 0:00:45.231 ******
2026-01-03 00:56:48.136637 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-03 00:56:48.136642 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-03 00:56:48.136650 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-03 00:56:48.136655 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-03 00:56:48.136659 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-03 00:56:48.136664 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-03 00:56:48.136669 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-03 00:56:48.136674 | orchestrator |
2026-01-03 00:56:48.136679 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-01-03 00:56:48.136684 | orchestrator | Saturday 03 January 2026 00:46:20 +0000 (0:00:00.797) 0:00:46.028 ******
2026-01-03 00:56:48.136689 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-03 00:56:48.136694 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-03 00:56:48.136699 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-03 00:56:48.136705 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-03 00:56:48.136709 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-03 00:56:48.136712 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-03 00:56:48.136715 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-03 00:56:48.136718 | orchestrator |
2026-01-03 00:56:48.136721 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-03 00:56:48.136724 | orchestrator | Saturday 03 January 2026 00:46:21 +0000 (0:00:01.623) 0:00:47.651 ******
2026-01-03 00:56:48.136728 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:56:48.136732 | orchestrator |
2026-01-03 00:56:48.136735 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-03 00:56:48.136738 | orchestrator | Saturday 03 January 2026 00:46:22 +0000 (0:00:01.181) 0:00:48.833 ******
2026-01-03 00:56:48.136742 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:56:48.136747 | orchestrator |
2026-01-03 00:56:48.136752 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-03 00:56:48.136760 | orchestrator | Saturday 03 January 2026 00:46:24 +0000 (0:00:01.461) 0:00:50.295 ******
2026-01-03 00:56:48.136765 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.136769 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.136775 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.136779 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.136785 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.136790 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.136795 | orchestrator |
2026-01-03 00:56:48.136800 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-03 00:56:48.136806 | orchestrator | Saturday 03 January 2026 00:46:25 +0000 (0:00:01.295) 0:00:51.591 ******
2026-01-03 00:56:48.136811 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.136816 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.136848 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:56:48.136851 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.136855 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:56:48.136858 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:56:48.136861 | orchestrator |
2026-01-03 00:56:48.136865 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-03 00:56:48.136871 | orchestrator | Saturday 03 January 2026 00:46:26 +0000 (0:00:01.241) 0:00:52.832 ******
2026-01-03 00:56:48.136880 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.136886 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:56:48.136892 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:56:48.136898 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.136903 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:56:48.136909 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.136914 | orchestrator |
2026-01-03 00:56:48.136920 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-03 00:56:48.136924 | orchestrator | Saturday 03 January 2026 00:46:27 +0000 (0:00:00.796) 0:00:53.629 ******
2026-01-03 00:56:48.136927 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.136930 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.136934 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:56:48.136937 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.136940 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:56:48.136943 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:56:48.136946 | orchestrator |
2026-01-03 00:56:48.136949 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-03 00:56:48.136952 | orchestrator | Saturday 03 January 2026 00:46:28 +0000 (0:00:00.880) 0:00:54.510 ******
2026-01-03 00:56:48.136955 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.136959 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.136964 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.136969 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.136974 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.137001 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.137008 | orchestrator |
2026-01-03 00:56:48.137013 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-03 00:56:48.137019 | orchestrator | Saturday 03 January 2026 00:46:29 +0000 (0:00:01.101) 0:00:55.611 ******
2026-01-03 00:56:48.137024 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.137029 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.137034 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.137037 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.137040 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.137043 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.137046 | orchestrator |
2026-01-03 00:56:48.137050 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-03 00:56:48.137053 | orchestrator | Saturday 03 January 2026 00:46:30 +0000 (0:00:00.658) 0:00:56.270 ******
2026-01-03 00:56:48.137056 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.137059 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.137062 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.137065 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.137068 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.137071 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.137074 | orchestrator |
2026-01-03 00:56:48.137077 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-03 00:56:48.137080 | orchestrator | Saturday 03 January 2026 00:46:30 +0000 (0:00:00.579) 0:00:56.849 ******
2026-01-03 00:56:48.137083 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:56:48.137087 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:56:48.137090 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:56:48.137093 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.137096 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.137099 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.137102 | orchestrator |
2026-01-03 00:56:48.137105 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-03 00:56:48.137108 | orchestrator | Saturday 03 January 2026 00:46:31 +0000 (0:00:00.872) 0:00:57.721 ******
2026-01-03 00:56:48.137111 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:56:48.137114 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:56:48.137117 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:56:48.137120 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.137123 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.137129 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.137133 | orchestrator |
2026-01-03 00:56:48.137136 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-03 00:56:48.137139 | orchestrator | Saturday 03 January 2026 00:46:33 +0000 (0:00:01.573) 0:00:59.295 ******
2026-01-03 00:56:48.137142 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.137145 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.137148 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.137151 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.137154 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.137157 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.137160 | orchestrator |
2026-01-03 00:56:48.137164 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-03 00:56:48.137167 | orchestrator | Saturday 03 January 2026 00:46:34 +0000 (0:00:00.989) 0:01:00.285 ******
2026-01-03 00:56:48.137170 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.137173 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.137176 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.137179 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.137182 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.137185 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.137188 | orchestrator |
2026-01-03 00:56:48.137193 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-03 00:56:48.137197 | orchestrator | Saturday 03 January 2026 00:46:35 +0000 (0:00:01.213) 0:01:01.498 ******
2026-01-03 00:56:48.137200 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:56:48.137203 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:56:48.137206 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:56:48.137209 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.137212 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.137215 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.137218 | orchestrator |
2026-01-03 00:56:48.137221 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-03 00:56:48.137225 | orchestrator | Saturday 03 January 2026 00:46:36 +0000 (0:00:00.517) 0:01:02.015 ******
2026-01-03 00:56:48.137228 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:56:48.137231 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:56:48.137234 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:56:48.137237 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.137240 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.137243 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.137246 | orchestrator |
2026-01-03 00:56:48.137249 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-03 00:56:48.137252 | orchestrator | Saturday 03 January 2026 00:46:36 +0000 (0:00:00.775) 0:01:02.790 ******
2026-01-03 00:56:48.137255 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:56:48.137258 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:56:48.137262 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:56:48.137265 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.137268 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.137271 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.137274 | orchestrator |
2026-01-03 00:56:48.137277 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-03 00:56:48.137280 | orchestrator | Saturday 03 January 2026 00:46:37 +0000 (0:00:00.806) 0:01:03.597 ******
2026-01-03 00:56:48.137283 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.137286 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.137289 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.137292 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.137295 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.137298 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.137301 | orchestrator |
2026-01-03 00:56:48.137305 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-03 00:56:48.137308 | orchestrator | Saturday 03 January 2026 00:46:38 +0000 (0:00:00.952) 0:01:04.550 ******
2026-01-03 00:56:48.137313 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.137316 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.137319 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.137323 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.137337 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.137341 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.137344 | orchestrator |
2026-01-03 00:56:48.137347 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-03 00:56:48.137350 | orchestrator | Saturday 03 January 2026 00:46:39 +0000 (0:00:01.328) 0:01:05.879 ******
2026-01-03 00:56:48.137353 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.137356 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.137359 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.137362 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.137365 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.137369 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.137372 | orchestrator |
2026-01-03 00:56:48.137375 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-03 00:56:48.137378 | orchestrator | Saturday 03 January 2026 00:46:40 +0000 (0:00:00.593) 0:01:06.472 ******
2026-01-03 00:56:48.137381 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:56:48.137384 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:56:48.137387 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:56:48.137390 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.137393 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.137396 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.137399 | orchestrator |
2026-01-03 00:56:48.137402 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-03 00:56:48.137405 | orchestrator | Saturday 03 January 2026 00:46:41 +0000 (0:00:00.877) 0:01:07.350 ******
2026-01-03 00:56:48.137408 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:56:48.137411 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:56:48.137414 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:56:48.137417 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.137420 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.137423 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.137426 | orchestrator |
2026-01-03 00:56:48.137430 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-01-03 00:56:48.137433 | orchestrator | Saturday 03 January 2026 00:46:43 +0000 (0:00:01.552) 0:01:08.903 ******
2026-01-03 00:56:48.137436 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:56:48.137439 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:56:48.137442 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:56:48.137446 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:56:48.137449 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:56:48.137452 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:56:48.137455 | orchestrator |
2026-01-03 00:56:48.137458 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-01-03 00:56:48.137461 | orchestrator | Saturday 03 January 2026 00:46:45 +0000 (0:00:02.411) 0:01:11.314 ******
2026-01-03 00:56:48.137464 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:56:48.137467 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:56:48.137470 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:56:48.137473 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:56:48.137476 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:56:48.137479 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:56:48.137483 | orchestrator |
2026-01-03 00:56:48.137488 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-01-03 00:56:48.137493 | orchestrator | Saturday 03 January 2026 00:46:47 +0000 (0:00:02.272) 0:01:13.586 ******
2026-01-03 00:56:48.137499 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4, testbed-node-3, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:56:48.137507 | orchestrator |
2026-01-03 00:56:48.137514 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-01-03 00:56:48.137520 | orchestrator | Saturday 03 January 2026 00:46:48 +0000 (0:00:01.138) 0:01:14.725 ******
2026-01-03 00:56:48.137525 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.137530 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.137536 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.137541 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.137547 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.137550 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.137553 | orchestrator |
2026-01-03 00:56:48.137556 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-01-03 00:56:48.137559 | orchestrator | Saturday 03 January 2026 00:46:49 +0000 (0:00:00.547) 0:01:15.272 ******
2026-01-03 00:56:48.137562 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.137565 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.137568 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.137571 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.137574 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.137577 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.137580 | orchestrator |
2026-01-03 00:56:48.137583 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-01-03 00:56:48.137586 | orchestrator | Saturday 03 January 2026 00:46:50 +0000 (0:00:00.857) 0:01:16.130 ******
2026-01-03 00:56:48.137589 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-03 00:56:48.137592 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-03 00:56:48.137596 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-03 00:56:48.137599 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-03 00:56:48.137602 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-03 00:56:48.137605 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-03 00:56:48.137608 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-03 00:56:48.137611 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-03 00:56:48.137614 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-03 00:56:48.137617 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-03 00:56:48.137631 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-03 00:56:48.137635 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-03 00:56:48.137638 | orchestrator |
2026-01-03 00:56:48.137641 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-01-03 00:56:48.137644 | orchestrator | Saturday 03 January 2026 00:46:51 +0000 (0:00:01.384) 0:01:17.514 ******
2026-01-03 00:56:48.137647 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:56:48.137650 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:56:48.137653 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:56:48.137657 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:56:48.137660 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:56:48.137663 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:56:48.137666 | orchestrator |
2026-01-03 00:56:48.137669 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-01-03 00:56:48.137672 | orchestrator | Saturday 03 January 2026 00:46:53 +0000 (0:00:01.500) 0:01:19.014 ******
2026-01-03 00:56:48.137675 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.137678 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.137681 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.137684 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.137690 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.137693 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.137696 | orchestrator |
2026-01-03 00:56:48.137699 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-01-03 00:56:48.137702 | orchestrator | Saturday 03 January 2026 00:46:53 +0000 (0:00:00.735) 0:01:19.750 ******
2026-01-03 00:56:48.137705 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.137708 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.137711 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.137714 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.137717 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.137720 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.137723 | orchestrator |
2026-01-03 00:56:48.137726 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-01-03 00:56:48.137730 | orchestrator | Saturday 03 January 2026 00:46:54 +0000 (0:00:01.022) 0:01:20.772 ******
2026-01-03 00:56:48.137733 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.137736 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.137739 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.137742 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.137745 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.137748 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.137751 | orchestrator |
2026-01-03 00:56:48.137754 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-01-03 00:56:48.137757 | orchestrator | Saturday 03 January 2026 00:46:55 +0000 (0:00:00.592) 0:01:21.365 ******
2026-01-03 00:56:48.137761 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:56:48.137764 | orchestrator |
2026-01-03 00:56:48.137767 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-01-03 00:56:48.137770 | orchestrator | Saturday 03 January 2026 00:46:56 +0000 (0:00:01.258) 0:01:22.624 ******
2026-01-03 00:56:48.137773 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:56:48.137776 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.137779 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:56:48.137786 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.137789 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.137792 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:56:48.137795 | orchestrator |
2026-01-03 00:56:48.137798 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-01-03 00:56:48.137801 | orchestrator | Saturday 03 January 2026 00:48:20 +0000 (0:01:23.968) 0:02:46.592 ******
2026-01-03 00:56:48.137804 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-03 00:56:48.137808 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-03 00:56:48.137811 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-03 00:56:48.137814 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.137817 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-03 00:56:48.137832 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-03 00:56:48.137835 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-03 00:56:48.137838 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.137842 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-03 00:56:48.137845 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-03 00:56:48.137848 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-03 00:56:48.137851 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.137854 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-03 00:56:48.137860 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-03 00:56:48.137863 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-03 00:56:48.137866 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.137869 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-03 00:56:48.137874 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-03 00:56:48.137879 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-03 00:56:48.137886 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.137908 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-03 00:56:48.137914 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-03 00:56:48.137919 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-03 00:56:48.137924 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.137929 | orchestrator |
2026-01-03 00:56:48.137933 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-01-03 00:56:48.137938 | orchestrator | Saturday 03 January 2026 00:48:21 +0000 (0:00:00.515) 0:02:47.107 ******
2026-01-03 00:56:48.137943 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.137948 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.137953 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.137958 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.137963 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.137968 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.137973 | orchestrator |
2026-01-03 00:56:48.137978 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-01-03 00:56:48.137983 | orchestrator | Saturday 03 January 2026 00:48:22 +0000 (0:00:00.128) 0:02:48.041 ******
2026-01-03 00:56:48.137987 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.137990 | orchestrator |
2026-01-03 00:56:48.137993 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-01-03 00:56:48.137996 | orchestrator | Saturday 03 January 2026 00:48:22 +0000 (0:00:00.128) 0:02:48.170 ******
2026-01-03 00:56:48.137999 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.138004 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.138009 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.138051 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.138056 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.138061 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.138065 | orchestrator |
2026-01-03 00:56:48.138070 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-01-03 00:56:48.138074 | orchestrator | Saturday 03 January 2026 00:48:23 +0000 (0:00:00.752) 0:02:48.923 ******
2026-01-03 00:56:48.138079 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.138084 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.138088 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.138092 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.138097 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.138102 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.138107 | orchestrator |
2026-01-03 00:56:48.138112 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-01-03 00:56:48.138117 | orchestrator | Saturday 03 January 2026 00:48:23 +0000 (0:00:00.838) 0:02:49.761 ******
2026-01-03 00:56:48.138122 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.138128 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.138133 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.138138 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.138144 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.138149 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.138154 | orchestrator |
2026-01-03 00:56:48.138164 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-01-03 00:56:48.138170 | orchestrator | Saturday 03 January 2026 00:48:24 +0000 (0:00:00.598) 0:02:50.360 ******
2026-01-03 00:56:48.138173 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:56:48.138177 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:56:48.138180 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:56:48.138183 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.138186 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.138191 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.138195 | orchestrator |
2026-01-03 00:56:48.138198 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-01-03 00:56:48.138201 | orchestrator | Saturday 03 January 2026 00:48:28 +0000 (0:00:03.616) 0:02:53.977 ******
2026-01-03 00:56:48.138204 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:56:48.138207 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:56:48.138210 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:56:48.138213 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.138216 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.138219 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.138222 | orchestrator |
2026-01-03 00:56:48.138225 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-01-03 00:56:48.138228 | orchestrator | Saturday 03 January 2026 00:48:28 +0000 (0:00:00.588) 0:02:54.565 ******
2026-01-03 00:56:48.138232 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:56:48.138236 | orchestrator |
2026-01-03 00:56:48.138239 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-01-03 00:56:48.138242 | orchestrator | Saturday 03 January 2026 00:48:29 +0000 (0:00:01.061) 0:02:55.627 ******
2026-01-03 00:56:48.138245 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.138248 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.138251 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.138254 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.138258 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.138263 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.138270 | orchestrator |
2026-01-03 00:56:48.138277 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-01-03 00:56:48.138281 | orchestrator | Saturday 03 January 2026 00:48:30 +0000 (0:00:00.626) 0:02:56.254 ******
2026-01-03 00:56:48.138286 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.138290 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.138295 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.138300 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.138304 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.138309 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.138314 | orchestrator |
2026-01-03 00:56:48.138320 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-01-03 00:56:48.138325 | orchestrator | Saturday 03 January 2026 00:48:30 +0000 (0:00:00.565) 0:02:56.819 ******
2026-01-03 00:56:48.138330 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.138334 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.138354 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.138358 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.138362 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.138365 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.138368 | orchestrator |
2026-01-03 00:56:48.138371 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-01-03 00:56:48.138374 | orchestrator | Saturday 03 January 2026 00:48:31 +0000 (0:00:00.681) 0:02:57.500 ******
2026-01-03 00:56:48.138377 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.138380 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.138383 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.138386 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.138393 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.138396 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.138399 | orchestrator |
2026-01-03 00:56:48.138402 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-01-03 00:56:48.138405 | orchestrator | Saturday 03 January 2026 00:48:32 +0000 (0:00:00.644) 0:02:58.145 ******
2026-01-03 00:56:48.138409 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.138412 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.138415 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.138418 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.138421 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.138424 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.138427 | orchestrator |
2026-01-03 00:56:48.138430 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-01-03 00:56:48.138433 | orchestrator | Saturday 03 January 2026 00:48:32 +0000 (0:00:00.677) 0:02:58.822 ******
2026-01-03 00:56:48.138437 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.138440 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.138443 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.138446 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.138449 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.138454 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.138459 | orchestrator |
2026-01-03 00:56:48.138464 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-01-03 00:56:48.138469 | orchestrator | Saturday 03 January 2026 00:48:33 +0000 (0:00:00.569) 0:02:59.392 ******
2026-01-03 00:56:48.138474 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.138479 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.138483 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.138488 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.138492 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.138497 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.138502 | orchestrator |
2026-01-03 00:56:48.138507 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-01-03 00:56:48.138513 | orchestrator | Saturday 03 January 2026 00:48:34 +0000 (0:00:00.617) 0:03:00.009 ******
2026-01-03 00:56:48.138518 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.138523 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.138528 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.138533 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.138538 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.138541 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.138544 | orchestrator |
2026-01-03 00:56:48.138547 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-01-03 00:56:48.138551 | orchestrator | Saturday 03 January 2026 00:48:34 +0000 (0:00:00.584) 0:03:00.593 ******
2026-01-03 00:56:48.138554 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:56:48.138557 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:56:48.138560 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:56:48.138565 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.138568 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.138571 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.138574 | orchestrator |
2026-01-03 00:56:48.138578 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-01-03 00:56:48.138581 | orchestrator | Saturday 03 January 2026 00:48:35 +0000 (0:00:01.112) 0:03:01.706 ******
2026-01-03 00:56:48.138584 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:56:48.138587 | orchestrator |
2026-01-03 00:56:48.138591 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-01-03 00:56:48.138594 | orchestrator | Saturday 03 January 2026 00:48:36 +0000 (0:00:00.941) 0:03:02.647 ******
2026-01-03 00:56:48.138599 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-01-03 00:56:48.138603 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-01-03 00:56:48.138606 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-01-03 00:56:48.138609 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-01-03 00:56:48.138612 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-01-03 00:56:48.138615 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-01-03 00:56:48.138618 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-01-03 00:56:48.138621 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-01-03 00:56:48.138624 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-01-03 00:56:48.138627 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-01-03 00:56:48.138630 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-01-03 00:56:48.138633 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-01-03 00:56:48.138636 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-01-03 00:56:48.138639 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-01-03 00:56:48.138642 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-01-03 00:56:48.138645 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-01-03 00:56:48.138648 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-01-03 00:56:48.138651 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-01-03 00:56:48.138667 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-01-03 00:56:48.138671 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-01-03 00:56:48.138674 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-01-03 00:56:48.138677 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-01-03 00:56:48.138680 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-01-03 00:56:48.138683 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-01-03 00:56:48.138686 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-01-03 00:56:48.138690 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-01-03 00:56:48.138693 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-01-03 00:56:48.138696 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-01-03 00:56:48.138699 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-01-03 00:56:48.138702 | orchestrator | changed: 
[testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-01-03 00:56:48.138705 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-01-03 00:56:48.138708 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-01-03 00:56:48.138711 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-01-03 00:56:48.138714 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-01-03 00:56:48.138717 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-01-03 00:56:48.138720 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-01-03 00:56:48.138723 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-01-03 00:56:48.138726 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-01-03 00:56:48.138729 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-01-03 00:56:48.138732 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-01-03 00:56:48.138735 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-01-03 00:56:48.138741 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-01-03 00:56:48.138746 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-01-03 00:56:48.138751 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-01-03 00:56:48.138760 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-03 00:56:48.138765 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-03 00:56:48.138771 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-01-03 00:56:48.138776 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-01-03 00:56:48.138782 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-01-03 00:56:48.138788 | 
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-03 00:56:48.138794 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-03 00:56:48.138798 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-03 00:56:48.138801 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-03 00:56:48.138807 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-03 00:56:48.138810 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-01-03 00:56:48.138813 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-03 00:56:48.138816 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-03 00:56:48.138831 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-03 00:56:48.138834 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-03 00:56:48.138837 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-03 00:56:48.138840 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-03 00:56:48.138844 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-03 00:56:48.138847 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-03 00:56:48.138850 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-03 00:56:48.138853 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-03 00:56:48.138856 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-03 00:56:48.138859 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-03 00:56:48.138862 | orchestrator | changed: [testbed-node-1] => 
(item=/var/lib/ceph/bootstrap-mds) 2026-01-03 00:56:48.138865 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-03 00:56:48.138868 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-03 00:56:48.138871 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-03 00:56:48.138874 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-03 00:56:48.138877 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-03 00:56:48.138880 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-03 00:56:48.138883 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-03 00:56:48.138886 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-03 00:56:48.138906 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-01-03 00:56:48.138914 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-01-03 00:56:48.138918 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-03 00:56:48.138923 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-03 00:56:48.138928 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-03 00:56:48.138933 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-01-03 00:56:48.138938 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-03 00:56:48.138942 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-01-03 00:56:48.138951 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-03 00:56:48.138956 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-03 00:56:48.138961 | orchestrator 
| changed: [testbed-node-4] => (item=/var/run/ceph) 2026-01-03 00:56:48.138965 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-03 00:56:48.138970 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-01-03 00:56:48.138975 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-01-03 00:56:48.138980 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-03 00:56:48.138985 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-01-03 00:56:48.138990 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-01-03 00:56:48.138995 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-01-03 00:56:48.139000 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-01-03 00:56:48.139005 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-01-03 00:56:48.139008 | orchestrator | 2026-01-03 00:56:48.139012 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-01-03 00:56:48.139017 | orchestrator | Saturday 03 January 2026 00:48:43 +0000 (0:00:06.508) 0:03:09.156 ****** 2026-01-03 00:56:48.139022 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.139027 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.139032 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.139037 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:56:48.139042 | orchestrator | 2026-01-03 00:56:48.139047 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-01-03 00:56:48.139052 | orchestrator | Saturday 03 January 2026 00:48:43 +0000 (0:00:00.709) 0:03:09.865 ****** 2026-01-03 00:56:48.139058 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 
'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-03 00:56:48.139063 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-03 00:56:48.139069 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-03 00:56:48.139074 | orchestrator | 2026-01-03 00:56:48.139083 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-01-03 00:56:48.139087 | orchestrator | Saturday 03 January 2026 00:48:44 +0000 (0:00:00.744) 0:03:10.610 ****** 2026-01-03 00:56:48.139090 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-03 00:56:48.139093 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-03 00:56:48.139096 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-03 00:56:48.139099 | orchestrator | 2026-01-03 00:56:48.139102 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-01-03 00:56:48.139105 | orchestrator | Saturday 03 January 2026 00:48:45 +0000 (0:00:01.122) 0:03:11.733 ****** 2026-01-03 00:56:48.139108 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.139111 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.139114 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.139117 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.139120 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.139123 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.139126 | orchestrator | 2026-01-03 00:56:48.139129 | orchestrator 
| TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-01-03 00:56:48.139132 | orchestrator | Saturday 03 January 2026 00:48:46 +0000 (0:00:00.544) 0:03:12.278 ****** 2026-01-03 00:56:48.139138 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.139141 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.139144 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.139148 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.139151 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.139154 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.139157 | orchestrator | 2026-01-03 00:56:48.139160 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-01-03 00:56:48.139163 | orchestrator | Saturday 03 January 2026 00:48:47 +0000 (0:00:00.747) 0:03:13.026 ****** 2026-01-03 00:56:48.139166 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.139169 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.139172 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.139175 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.139178 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.139181 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.139184 | orchestrator | 2026-01-03 00:56:48.139202 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-01-03 00:56:48.139206 | orchestrator | Saturday 03 January 2026 00:48:47 +0000 (0:00:00.449) 0:03:13.475 ****** 2026-01-03 00:56:48.139209 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.139212 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.139215 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.139218 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.139221 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.139224 | 
orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.139228 | orchestrator | 2026-01-03 00:56:48.139231 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-01-03 00:56:48.139234 | orchestrator | Saturday 03 January 2026 00:48:48 +0000 (0:00:00.745) 0:03:14.221 ****** 2026-01-03 00:56:48.139237 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.139240 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.139243 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.139246 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.139249 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.139252 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.139255 | orchestrator | 2026-01-03 00:56:48.139258 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-01-03 00:56:48.139261 | orchestrator | Saturday 03 January 2026 00:48:48 +0000 (0:00:00.474) 0:03:14.695 ****** 2026-01-03 00:56:48.139265 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.139268 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.139271 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.139274 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.139277 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.139280 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.139283 | orchestrator | 2026-01-03 00:56:48.139286 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-01-03 00:56:48.139289 | orchestrator | Saturday 03 January 2026 00:48:49 +0000 (0:00:00.766) 0:03:15.461 ****** 2026-01-03 00:56:48.139292 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.139295 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.139298 | orchestrator | skipping: 
[testbed-node-0] 2026-01-03 00:56:48.139301 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.139304 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.139307 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.139310 | orchestrator | 2026-01-03 00:56:48.139313 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-01-03 00:56:48.139316 | orchestrator | Saturday 03 January 2026 00:48:50 +0000 (0:00:00.468) 0:03:15.930 ****** 2026-01-03 00:56:48.139319 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.139350 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.139354 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.139357 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.139360 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.139363 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.139366 | orchestrator | 2026-01-03 00:56:48.139369 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-01-03 00:56:48.139372 | orchestrator | Saturday 03 January 2026 00:48:50 +0000 (0:00:00.631) 0:03:16.561 ****** 2026-01-03 00:56:48.139375 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.139378 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.139381 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.139384 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.139387 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.139392 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.139396 | orchestrator | 2026-01-03 00:56:48.139399 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-01-03 00:56:48.139402 | orchestrator | Saturday 03 January 2026 00:48:55 +0000 (0:00:04.503) 0:03:21.065 ****** 2026-01-03 00:56:48.139405 | 
orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.139408 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.139411 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.139414 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.139417 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.139420 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.139423 | orchestrator | 2026-01-03 00:56:48.139426 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-01-03 00:56:48.139431 | orchestrator | Saturday 03 January 2026 00:48:55 +0000 (0:00:00.782) 0:03:21.848 ****** 2026-01-03 00:56:48.139437 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.139443 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.139450 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.139456 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.139461 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.139466 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.139471 | orchestrator | 2026-01-03 00:56:48.139477 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-01-03 00:56:48.139482 | orchestrator | Saturday 03 January 2026 00:48:56 +0000 (0:00:00.485) 0:03:22.333 ****** 2026-01-03 00:56:48.139487 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.139493 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.139498 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.139504 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.139510 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.139516 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.139522 | orchestrator | 2026-01-03 00:56:48.139528 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-01-03 00:56:48.139532 | orchestrator | 
Saturday 03 January 2026 00:48:56 +0000 (0:00:00.552) 0:03:22.886 ****** 2026-01-03 00:56:48.139535 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-03 00:56:48.139539 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-03 00:56:48.139542 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-03 00:56:48.139546 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.139569 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.139577 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.139581 | orchestrator | 2026-01-03 00:56:48.139586 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-01-03 00:56:48.139591 | orchestrator | Saturday 03 January 2026 00:48:57 +0000 (0:00:00.466) 0:03:23.353 ****** 2026-01-03 00:56:48.139607 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-01-03 00:56:48.139614 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-01-03 00:56:48.139619 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.139623 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': 
'/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-01-03 00:56:48.139629 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-01-03 00:56:48.139633 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-01-03 00:56:48.139638 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-01-03 00:56:48.139643 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.139648 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.139653 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.139657 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.139668 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.139673 | orchestrator | 2026-01-03 00:56:48.139678 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-01-03 00:56:48.139683 | orchestrator | Saturday 03 January 2026 00:48:58 +0000 (0:00:00.673) 0:03:24.026 ****** 2026-01-03 00:56:48.139688 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.139693 | orchestrator | 
skipping: [testbed-node-4] 2026-01-03 00:56:48.139698 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.139703 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.139708 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.139714 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.139718 | orchestrator | 2026-01-03 00:56:48.139721 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-01-03 00:56:48.139724 | orchestrator | Saturday 03 January 2026 00:48:58 +0000 (0:00:00.499) 0:03:24.526 ****** 2026-01-03 00:56:48.139727 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.139730 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.139733 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.139736 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.139739 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.139742 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.139745 | orchestrator | 2026-01-03 00:56:48.139748 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-03 00:56:48.139752 | orchestrator | Saturday 03 January 2026 00:48:59 +0000 (0:00:00.649) 0:03:25.176 ****** 2026-01-03 00:56:48.139757 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.139760 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.139763 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.139766 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.139769 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.139772 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.139775 | orchestrator | 2026-01-03 00:56:48.139779 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-03 00:56:48.139783 | 
orchestrator | Saturday 03 January 2026 00:48:59 +0000 (0:00:00.571) 0:03:25.747 ****** 2026-01-03 00:56:48.139787 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.139794 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.139801 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.139805 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.139811 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.139815 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.139849 | orchestrator | 2026-01-03 00:56:48.139856 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-03 00:56:48.139883 | orchestrator | Saturday 03 January 2026 00:49:00 +0000 (0:00:00.668) 0:03:26.416 ****** 2026-01-03 00:56:48.139887 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.139890 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.139893 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.139896 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.139899 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.139902 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.139905 | orchestrator | 2026-01-03 00:56:48.139908 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-03 00:56:48.139911 | orchestrator | Saturday 03 January 2026 00:49:01 +0000 (0:00:00.589) 0:03:27.005 ****** 2026-01-03 00:56:48.139915 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.139918 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.139921 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.139924 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.139927 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.139930 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.139933 | orchestrator | 2026-01-03 00:56:48.139936 | 
orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-03 00:56:48.139939 | orchestrator | Saturday 03 January 2026 00:49:01 +0000 (0:00:00.876) 0:03:27.881 ****** 2026-01-03 00:56:48.139942 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-03 00:56:48.139945 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-03 00:56:48.139948 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-03 00:56:48.139951 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.139954 | orchestrator | 2026-01-03 00:56:48.139957 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-03 00:56:48.139961 | orchestrator | Saturday 03 January 2026 00:49:02 +0000 (0:00:00.428) 0:03:28.310 ****** 2026-01-03 00:56:48.139964 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-03 00:56:48.139967 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-03 00:56:48.139970 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-03 00:56:48.139973 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.139976 | orchestrator | 2026-01-03 00:56:48.139979 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-03 00:56:48.139982 | orchestrator | Saturday 03 January 2026 00:49:02 +0000 (0:00:00.410) 0:03:28.721 ****** 2026-01-03 00:56:48.139985 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-03 00:56:48.139988 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-03 00:56:48.139991 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-03 00:56:48.139994 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.140001 | orchestrator | 2026-01-03 00:56:48.140004 | orchestrator | TASK [ceph-facts : Reset rgw_instances 
(workaround)] *************************** 2026-01-03 00:56:48.140007 | orchestrator | Saturday 03 January 2026 00:49:03 +0000 (0:00:00.428) 0:03:29.149 ****** 2026-01-03 00:56:48.140010 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.140013 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.140016 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.140019 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.140022 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.140025 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.140028 | orchestrator | 2026-01-03 00:56:48.140031 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-03 00:56:48.140037 | orchestrator | Saturday 03 January 2026 00:49:03 +0000 (0:00:00.597) 0:03:29.747 ****** 2026-01-03 00:56:48.140040 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-03 00:56:48.140043 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-03 00:56:48.140046 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-03 00:56:48.140049 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-01-03 00:56:48.140052 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.140055 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-01-03 00:56:48.140058 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.140061 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-01-03 00:56:48.140064 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.140067 | orchestrator | 2026-01-03 00:56:48.140070 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-01-03 00:56:48.140073 | orchestrator | Saturday 03 January 2026 00:49:05 +0000 (0:00:01.936) 0:03:31.683 ****** 2026-01-03 00:56:48.140076 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:56:48.140079 | orchestrator | changed: [testbed-node-3] 2026-01-03 
00:56:48.140082 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:56:48.140085 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:56:48.140089 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:56:48.140092 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:56:48.140095 | orchestrator | 2026-01-03 00:56:48.140098 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-03 00:56:48.140101 | orchestrator | Saturday 03 January 2026 00:49:07 +0000 (0:00:02.035) 0:03:33.719 ****** 2026-01-03 00:56:48.140104 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:56:48.140107 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:56:48.140110 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:56:48.140113 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:56:48.140116 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:56:48.140119 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:56:48.140122 | orchestrator | 2026-01-03 00:56:48.140125 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-01-03 00:56:48.140128 | orchestrator | Saturday 03 January 2026 00:49:08 +0000 (0:00:00.822) 0:03:34.542 ****** 2026-01-03 00:56:48.140131 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.140134 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.140137 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.140141 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:56:48.140144 | orchestrator | 2026-01-03 00:56:48.140147 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-01-03 00:56:48.140160 | orchestrator | Saturday 03 January 2026 00:49:09 +0000 (0:00:00.805) 0:03:35.347 ****** 2026-01-03 00:56:48.140164 | orchestrator | ok: [testbed-node-0] 2026-01-03 
00:56:48.140167 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.140170 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.140173 | orchestrator | 2026-01-03 00:56:48.140176 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-01-03 00:56:48.140179 | orchestrator | Saturday 03 January 2026 00:49:09 +0000 (0:00:00.312) 0:03:35.659 ****** 2026-01-03 00:56:48.140185 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:56:48.140188 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:56:48.140191 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:56:48.140194 | orchestrator | 2026-01-03 00:56:48.140197 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-01-03 00:56:48.140201 | orchestrator | Saturday 03 January 2026 00:49:11 +0000 (0:00:01.308) 0:03:36.968 ****** 2026-01-03 00:56:48.140204 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-03 00:56:48.140207 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-03 00:56:48.140210 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-03 00:56:48.140213 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.140216 | orchestrator | 2026-01-03 00:56:48.140219 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-01-03 00:56:48.140222 | orchestrator | Saturday 03 January 2026 00:49:11 +0000 (0:00:00.848) 0:03:37.817 ****** 2026-01-03 00:56:48.140225 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.140228 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.140231 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.140234 | orchestrator | 2026-01-03 00:56:48.140237 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-01-03 00:56:48.140240 | orchestrator | Saturday 03 January 2026 00:49:12 +0000 
(0:00:00.247) 0:03:38.064 ****** 2026-01-03 00:56:48.140243 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.140246 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.140249 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.140253 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:56:48.140256 | orchestrator | 2026-01-03 00:56:48.140259 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-01-03 00:56:48.140262 | orchestrator | Saturday 03 January 2026 00:49:12 +0000 (0:00:00.701) 0:03:38.766 ****** 2026-01-03 00:56:48.140265 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-03 00:56:48.140268 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-03 00:56:48.140271 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-03 00:56:48.140274 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.140277 | orchestrator | 2026-01-03 00:56:48.140280 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-01-03 00:56:48.140283 | orchestrator | Saturday 03 January 2026 00:49:13 +0000 (0:00:00.283) 0:03:39.050 ****** 2026-01-03 00:56:48.140286 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.140289 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.140292 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.140295 | orchestrator | 2026-01-03 00:56:48.140298 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-01-03 00:56:48.140301 | orchestrator | Saturday 03 January 2026 00:49:13 +0000 (0:00:00.312) 0:03:39.363 ****** 2026-01-03 00:56:48.140304 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.140307 | orchestrator | 2026-01-03 00:56:48.140312 | orchestrator | 
RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-01-03 00:56:48.140315 | orchestrator | Saturday 03 January 2026 00:49:13 +0000 (0:00:00.194) 0:03:39.557 ****** 2026-01-03 00:56:48.140318 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.140321 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.140324 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.140327 | orchestrator | 2026-01-03 00:56:48.140330 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-01-03 00:56:48.140333 | orchestrator | Saturday 03 January 2026 00:49:13 +0000 (0:00:00.285) 0:03:39.843 ****** 2026-01-03 00:56:48.140336 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.140340 | orchestrator | 2026-01-03 00:56:48.140343 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-01-03 00:56:48.140348 | orchestrator | Saturday 03 January 2026 00:49:14 +0000 (0:00:00.171) 0:03:40.014 ****** 2026-01-03 00:56:48.140351 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.140354 | orchestrator | 2026-01-03 00:56:48.140357 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-01-03 00:56:48.140360 | orchestrator | Saturday 03 January 2026 00:49:14 +0000 (0:00:00.153) 0:03:40.168 ****** 2026-01-03 00:56:48.140363 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.140366 | orchestrator | 2026-01-03 00:56:48.140369 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-01-03 00:56:48.140372 | orchestrator | Saturday 03 January 2026 00:49:14 +0000 (0:00:00.091) 0:03:40.259 ****** 2026-01-03 00:56:48.140375 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.140378 | orchestrator | 2026-01-03 00:56:48.140381 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] 
***************** 2026-01-03 00:56:48.140385 | orchestrator | Saturday 03 January 2026 00:49:14 +0000 (0:00:00.487) 0:03:40.747 ****** 2026-01-03 00:56:48.140388 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.140391 | orchestrator | 2026-01-03 00:56:48.140394 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-01-03 00:56:48.140397 | orchestrator | Saturday 03 January 2026 00:49:15 +0000 (0:00:00.187) 0:03:40.934 ****** 2026-01-03 00:56:48.140400 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-03 00:56:48.140403 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-03 00:56:48.140406 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-03 00:56:48.140409 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.140412 | orchestrator | 2026-01-03 00:56:48.140415 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-01-03 00:56:48.140427 | orchestrator | Saturday 03 January 2026 00:49:15 +0000 (0:00:00.356) 0:03:41.291 ****** 2026-01-03 00:56:48.140431 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.140434 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.140437 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.140440 | orchestrator | 2026-01-03 00:56:48.140443 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-01-03 00:56:48.140446 | orchestrator | Saturday 03 January 2026 00:49:15 +0000 (0:00:00.273) 0:03:41.565 ****** 2026-01-03 00:56:48.140449 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.140452 | orchestrator | 2026-01-03 00:56:48.140455 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-01-03 00:56:48.140458 | orchestrator | Saturday 03 January 2026 00:49:15 +0000 (0:00:00.179) 0:03:41.744 ****** 
2026-01-03 00:56:48.140461 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.140465 | orchestrator | 2026-01-03 00:56:48.140468 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-01-03 00:56:48.140471 | orchestrator | Saturday 03 January 2026 00:49:16 +0000 (0:00:00.201) 0:03:41.946 ****** 2026-01-03 00:56:48.140474 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.140477 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.140480 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.140483 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:56:48.140486 | orchestrator | 2026-01-03 00:56:48.140489 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-01-03 00:56:48.140492 | orchestrator | Saturday 03 January 2026 00:49:16 +0000 (0:00:00.808) 0:03:42.755 ****** 2026-01-03 00:56:48.140495 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.140498 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.140501 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.140504 | orchestrator | 2026-01-03 00:56:48.140507 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-01-03 00:56:48.140511 | orchestrator | Saturday 03 January 2026 00:49:17 +0000 (0:00:00.280) 0:03:43.035 ****** 2026-01-03 00:56:48.140516 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:56:48.140519 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:56:48.140522 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:56:48.140525 | orchestrator | 2026-01-03 00:56:48.140528 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-01-03 00:56:48.140531 | orchestrator | Saturday 03 January 2026 00:49:18 +0000 (0:00:01.079) 0:03:44.115 ****** 2026-01-03 
00:56:48.140534 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-03 00:56:48.140537 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-03 00:56:48.140540 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-03 00:56:48.140543 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.140546 | orchestrator | 2026-01-03 00:56:48.140551 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-01-03 00:56:48.140557 | orchestrator | Saturday 03 January 2026 00:49:18 +0000 (0:00:00.679) 0:03:44.794 ****** 2026-01-03 00:56:48.140563 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.140571 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.140576 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.140581 | orchestrator | 2026-01-03 00:56:48.140586 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-03 00:56:48.140591 | orchestrator | Saturday 03 January 2026 00:49:19 +0000 (0:00:00.393) 0:03:45.188 ****** 2026-01-03 00:56:48.140599 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.140604 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.140609 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.140615 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:56:48.140620 | orchestrator | 2026-01-03 00:56:48.140626 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-03 00:56:48.140632 | orchestrator | Saturday 03 January 2026 00:49:19 +0000 (0:00:00.682) 0:03:45.870 ****** 2026-01-03 00:56:48.140638 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.140643 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.140649 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.140654 | 
orchestrator | 2026-01-03 00:56:48.140657 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-03 00:56:48.140660 | orchestrator | Saturday 03 January 2026 00:49:20 +0000 (0:00:00.401) 0:03:46.272 ****** 2026-01-03 00:56:48.140663 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:56:48.140666 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:56:48.140669 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:56:48.140672 | orchestrator | 2026-01-03 00:56:48.140675 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-01-03 00:56:48.140678 | orchestrator | Saturday 03 January 2026 00:49:21 +0000 (0:00:01.166) 0:03:47.438 ****** 2026-01-03 00:56:48.140681 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-03 00:56:48.140684 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-03 00:56:48.140687 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-03 00:56:48.140690 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.140693 | orchestrator | 2026-01-03 00:56:48.140696 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-01-03 00:56:48.140700 | orchestrator | Saturday 03 January 2026 00:49:22 +0000 (0:00:00.624) 0:03:48.062 ****** 2026-01-03 00:56:48.140703 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.140706 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.140709 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.140712 | orchestrator | 2026-01-03 00:56:48.140715 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-01-03 00:56:48.140718 | orchestrator | Saturday 03 January 2026 00:49:22 +0000 (0:00:00.297) 0:03:48.360 ****** 2026-01-03 00:56:48.140721 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.140727 | orchestrator | 
skipping: [testbed-node-4] 2026-01-03 00:56:48.140730 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.140733 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.140736 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.140752 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.140756 | orchestrator | 2026-01-03 00:56:48.140759 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-01-03 00:56:48.140762 | orchestrator | Saturday 03 January 2026 00:49:23 +0000 (0:00:00.801) 0:03:49.161 ****** 2026-01-03 00:56:48.140765 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.140768 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.140771 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.140774 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:56:48.140777 | orchestrator | 2026-01-03 00:56:48.140780 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-01-03 00:56:48.140784 | orchestrator | Saturday 03 January 2026 00:49:24 +0000 (0:00:00.767) 0:03:49.928 ****** 2026-01-03 00:56:48.140787 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.140790 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.140793 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.140796 | orchestrator | 2026-01-03 00:56:48.140799 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-01-03 00:56:48.140802 | orchestrator | Saturday 03 January 2026 00:49:24 +0000 (0:00:00.659) 0:03:50.588 ****** 2026-01-03 00:56:48.140805 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:56:48.140808 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:56:48.140811 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:56:48.140814 | orchestrator | 2026-01-03 
00:56:48.140817 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-01-03 00:56:48.140831 | orchestrator | Saturday 03 January 2026 00:49:25 +0000 (0:00:01.100) 0:03:51.689 ****** 2026-01-03 00:56:48.140835 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-03 00:56:48.140838 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-03 00:56:48.140842 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-03 00:56:48.140845 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.140848 | orchestrator | 2026-01-03 00:56:48.140851 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-01-03 00:56:48.140854 | orchestrator | Saturday 03 January 2026 00:49:26 +0000 (0:00:00.580) 0:03:52.269 ****** 2026-01-03 00:56:48.140857 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.140860 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.140863 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.140866 | orchestrator | 2026-01-03 00:56:48.140869 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-01-03 00:56:48.140873 | orchestrator | 2026-01-03 00:56:48.140876 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-03 00:56:48.140879 | orchestrator | Saturday 03 January 2026 00:49:27 +0000 (0:00:00.654) 0:03:52.923 ****** 2026-01-03 00:56:48.140882 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:56:48.140886 | orchestrator | 2026-01-03 00:56:48.140889 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-03 00:56:48.140892 | orchestrator | Saturday 03 January 2026 00:49:27 +0000 (0:00:00.506) 0:03:53.429 ****** 2026-01-03 
00:56:48.140895 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:56:48.140898 | orchestrator | 2026-01-03 00:56:48.140903 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-03 00:56:48.140907 | orchestrator | Saturday 03 January 2026 00:49:27 +0000 (0:00:00.410) 0:03:53.840 ****** 2026-01-03 00:56:48.140913 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.140916 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.140919 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.140923 | orchestrator | 2026-01-03 00:56:48.140926 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-03 00:56:48.140929 | orchestrator | Saturday 03 January 2026 00:49:28 +0000 (0:00:00.886) 0:03:54.726 ****** 2026-01-03 00:56:48.140932 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.140935 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.140938 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.140941 | orchestrator | 2026-01-03 00:56:48.140944 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-03 00:56:48.140947 | orchestrator | Saturday 03 January 2026 00:49:29 +0000 (0:00:00.296) 0:03:55.023 ****** 2026-01-03 00:56:48.140950 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.140954 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.140957 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.140960 | orchestrator | 2026-01-03 00:56:48.140963 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-03 00:56:48.140966 | orchestrator | Saturday 03 January 2026 00:49:29 +0000 (0:00:00.310) 0:03:55.333 ****** 2026-01-03 00:56:48.140969 | orchestrator | skipping: [testbed-node-0] 
2026-01-03 00:56:48.140972 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.140975 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.140978 | orchestrator | 2026-01-03 00:56:48.140981 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-03 00:56:48.140984 | orchestrator | Saturday 03 January 2026 00:49:29 +0000 (0:00:00.259) 0:03:55.592 ****** 2026-01-03 00:56:48.140988 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.140991 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.140994 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.140997 | orchestrator | 2026-01-03 00:56:48.141000 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-03 00:56:48.141003 | orchestrator | Saturday 03 January 2026 00:49:30 +0000 (0:00:00.968) 0:03:56.560 ****** 2026-01-03 00:56:48.141006 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.141009 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.141012 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.141015 | orchestrator | 2026-01-03 00:56:48.141019 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-03 00:56:48.141022 | orchestrator | Saturday 03 January 2026 00:49:30 +0000 (0:00:00.274) 0:03:56.835 ****** 2026-01-03 00:56:48.141034 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.141038 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.141041 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.141044 | orchestrator | 2026-01-03 00:56:48.141047 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-03 00:56:48.141051 | orchestrator | Saturday 03 January 2026 00:49:31 +0000 (0:00:00.266) 0:03:57.101 ****** 2026-01-03 00:56:48.141054 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.141057 
| orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.141060 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.141063 | orchestrator | 2026-01-03 00:56:48.141066 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-03 00:56:48.141069 | orchestrator | Saturday 03 January 2026 00:49:32 +0000 (0:00:00.795) 0:03:57.897 ****** 2026-01-03 00:56:48.141072 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.141075 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.141078 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.141081 | orchestrator | 2026-01-03 00:56:48.141085 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-03 00:56:48.141088 | orchestrator | Saturday 03 January 2026 00:49:32 +0000 (0:00:00.719) 0:03:58.616 ****** 2026-01-03 00:56:48.141091 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.141094 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.141100 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.141103 | orchestrator | 2026-01-03 00:56:48.141106 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-03 00:56:48.141110 | orchestrator | Saturday 03 January 2026 00:49:33 +0000 (0:00:00.502) 0:03:59.119 ****** 2026-01-03 00:56:48.141113 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.141116 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.141119 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.141122 | orchestrator | 2026-01-03 00:56:48.141125 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-03 00:56:48.141128 | orchestrator | Saturday 03 January 2026 00:49:33 +0000 (0:00:00.317) 0:03:59.437 ****** 2026-01-03 00:56:48.141131 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.141134 | orchestrator | skipping: [testbed-node-1] 
2026-01-03 00:56:48.141137 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.141140 | orchestrator | 2026-01-03 00:56:48.141144 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-03 00:56:48.141147 | orchestrator | Saturday 03 January 2026 00:49:33 +0000 (0:00:00.264) 0:03:59.701 ****** 2026-01-03 00:56:48.141150 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.141153 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.141156 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.141159 | orchestrator | 2026-01-03 00:56:48.141162 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-03 00:56:48.141165 | orchestrator | Saturday 03 January 2026 00:49:34 +0000 (0:00:00.277) 0:03:59.979 ****** 2026-01-03 00:56:48.141168 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.141171 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.141175 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.141178 | orchestrator | 2026-01-03 00:56:48.141181 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-03 00:56:48.141184 | orchestrator | Saturday 03 January 2026 00:49:34 +0000 (0:00:00.568) 0:04:00.548 ****** 2026-01-03 00:56:48.141187 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.141190 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.141193 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.141196 | orchestrator | 2026-01-03 00:56:48.141199 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-03 00:56:48.141204 | orchestrator | Saturday 03 January 2026 00:49:34 +0000 (0:00:00.309) 0:04:00.857 ****** 2026-01-03 00:56:48.141207 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.141210 | orchestrator | skipping: [testbed-node-1] 
2026-01-03 00:56:48.141213 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.141216 | orchestrator | 2026-01-03 00:56:48.141219 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-03 00:56:48.141222 | orchestrator | Saturday 03 January 2026 00:49:35 +0000 (0:00:00.270) 0:04:01.128 ****** 2026-01-03 00:56:48.141226 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.141229 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.141232 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.141238 | orchestrator | 2026-01-03 00:56:48.141244 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-03 00:56:48.141249 | orchestrator | Saturday 03 January 2026 00:49:35 +0000 (0:00:00.467) 0:04:01.595 ****** 2026-01-03 00:56:48.141254 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.141260 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.141265 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.141270 | orchestrator | 2026-01-03 00:56:48.141275 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-03 00:56:48.141280 | orchestrator | Saturday 03 January 2026 00:49:36 +0000 (0:00:00.483) 0:04:02.078 ****** 2026-01-03 00:56:48.141285 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.141291 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.141296 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.141302 | orchestrator | 2026-01-03 00:56:48.141311 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-01-03 00:56:48.141314 | orchestrator | Saturday 03 January 2026 00:49:36 +0000 (0:00:00.563) 0:04:02.642 ****** 2026-01-03 00:56:48.141317 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.141320 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.141323 | orchestrator | ok: [testbed-node-1] 
2026-01-03 00:56:48.141326 | orchestrator | 2026-01-03 00:56:48.141329 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-01-03 00:56:48.141332 | orchestrator | Saturday 03 January 2026 00:49:37 +0000 (0:00:00.370) 0:04:03.012 ****** 2026-01-03 00:56:48.141335 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:56:48.141338 | orchestrator | 2026-01-03 00:56:48.141341 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-01-03 00:56:48.141344 | orchestrator | Saturday 03 January 2026 00:49:37 +0000 (0:00:00.589) 0:04:03.602 ****** 2026-01-03 00:56:48.141347 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.141350 | orchestrator | 2026-01-03 00:56:48.141365 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-01-03 00:56:48.141369 | orchestrator | Saturday 03 January 2026 00:49:37 +0000 (0:00:00.125) 0:04:03.727 ****** 2026-01-03 00:56:48.141372 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-03 00:56:48.141375 | orchestrator | 2026-01-03 00:56:48.141378 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-01-03 00:56:48.141381 | orchestrator | Saturday 03 January 2026 00:49:38 +0000 (0:00:00.884) 0:04:04.612 ****** 2026-01-03 00:56:48.141384 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.141387 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.141390 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.141393 | orchestrator | 2026-01-03 00:56:48.141396 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-01-03 00:56:48.141399 | orchestrator | Saturday 03 January 2026 00:49:38 +0000 (0:00:00.271) 0:04:04.883 ****** 2026-01-03 00:56:48.141403 | orchestrator | ok: [testbed-node-2] 
2026-01-03 00:56:48.141406 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.141409 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.141412 | orchestrator |
2026-01-03 00:56:48.141415 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-01-03 00:56:48.141418 | orchestrator | Saturday 03 January 2026 00:49:39 +0000 (0:00:00.301) 0:04:05.185 ******
2026-01-03 00:56:48.141421 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:56:48.141424 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:56:48.141427 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:56:48.141430 | orchestrator |
2026-01-03 00:56:48.141433 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-01-03 00:56:48.141436 | orchestrator | Saturday 03 January 2026 00:49:40 +0000 (0:00:01.400) 0:04:06.586 ******
2026-01-03 00:56:48.141439 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:56:48.141442 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:56:48.141445 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:56:48.141448 | orchestrator |
2026-01-03 00:56:48.141451 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-01-03 00:56:48.141454 | orchestrator | Saturday 03 January 2026 00:49:41 +0000 (0:00:00.591) 0:04:07.361 ******
2026-01-03 00:56:48.141457 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:56:48.141460 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:56:48.141463 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:56:48.141466 | orchestrator |
2026-01-03 00:56:48.141469 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-01-03 00:56:48.141473 | orchestrator | Saturday 03 January 2026 00:49:42 +0000 (0:00:00.591) 0:04:07.952 ******
2026-01-03 00:56:48.141477 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.141482 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.141487 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.141496 | orchestrator |
2026-01-03 00:56:48.141500 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-01-03 00:56:48.141505 | orchestrator | Saturday 03 January 2026 00:49:43 +0000 (0:00:01.009) 0:04:08.962 ******
2026-01-03 00:56:48.141509 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:56:48.141514 | orchestrator |
2026-01-03 00:56:48.141518 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-01-03 00:56:48.141524 | orchestrator | Saturday 03 January 2026 00:49:44 +0000 (0:00:01.711) 0:04:10.673 ******
2026-01-03 00:56:48.141529 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.141534 | orchestrator |
2026-01-03 00:56:48.141539 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-01-03 00:56:48.141543 | orchestrator | Saturday 03 January 2026 00:49:45 +0000 (0:00:00.991) 0:04:11.665 ******
2026-01-03 00:56:48.141551 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-03 00:56:48.141556 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-03 00:56:48.141560 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-03 00:56:48.141565 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-03 00:56:48.141569 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-03 00:56:48.141574 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-01-03 00:56:48.141580 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-03 00:56:48.141584 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-01-03 00:56:48.141588 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-01-03 00:56:48.141593 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-01-03 00:56:48.141598 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-03 00:56:48.141604 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-01-03 00:56:48.141609 | orchestrator |
2026-01-03 00:56:48.141613 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-01-03 00:56:48.141619 | orchestrator | Saturday 03 January 2026 00:49:49 +0000 (0:00:03.813) 0:04:15.479 ******
2026-01-03 00:56:48.141623 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:56:48.141628 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:56:48.141633 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:56:48.141638 | orchestrator |
2026-01-03 00:56:48.141643 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-01-03 00:56:48.141648 | orchestrator | Saturday 03 January 2026 00:49:50 +0000 (0:00:01.195) 0:04:16.674 ******
2026-01-03 00:56:48.141653 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.141658 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.141663 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.141668 | orchestrator |
2026-01-03 00:56:48.141674 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-01-03 00:56:48.141679 | orchestrator | Saturday 03 January 2026 00:49:51 +0000 (0:00:00.318) 0:04:16.993 ******
2026-01-03 00:56:48.141683 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.141687 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.141690 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.141693 | orchestrator |
2026-01-03 00:56:48.141696 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-01-03 00:56:48.141699 | orchestrator | Saturday 03 January 2026 00:49:51 +0000 (0:00:00.309) 0:04:17.302 ******
2026-01-03 00:56:48.141702 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:56:48.141724 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:56:48.141730 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:56:48.141734 | orchestrator |
2026-01-03 00:56:48.141739 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-01-03 00:56:48.141744 | orchestrator | Saturday 03 January 2026 00:49:53 +0000 (0:00:01.736) 0:04:19.038 ******
2026-01-03 00:56:48.141749 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:56:48.141759 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:56:48.141764 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:56:48.141769 | orchestrator |
2026-01-03 00:56:48.141774 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-01-03 00:56:48.141778 | orchestrator | Saturday 03 January 2026 00:49:54 +0000 (0:00:01.206) 0:04:20.244 ******
2026-01-03 00:56:48.141783 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.141788 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.141792 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.141798 | orchestrator |
2026-01-03 00:56:48.141803 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-01-03 00:56:48.141806 | orchestrator | Saturday 03 January 2026 00:49:54 +0000 (0:00:00.313) 0:04:20.558 ******
2026-01-03 00:56:48.141809 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-2, testbed-node-1
2026-01-03 00:56:48.141813 | orchestrator |
2026-01-03 00:56:48.141816 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-01-03 00:56:48.141831 | orchestrator | Saturday 03 January 2026 00:49:55 +0000 (0:00:01.116) 0:04:21.674 ******
2026-01-03 00:56:48.141835 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.141838 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.141841 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.141844 | orchestrator |
2026-01-03 00:56:48.141847 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-01-03 00:56:48.141850 | orchestrator | Saturday 03 January 2026 00:49:56 +0000 (0:00:00.420) 0:04:22.095 ******
2026-01-03 00:56:48.141853 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.141856 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.141859 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.141863 | orchestrator |
2026-01-03 00:56:48.141866 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-01-03 00:56:48.141869 | orchestrator | Saturday 03 January 2026 00:49:56 +0000 (0:00:00.431) 0:04:22.527 ******
2026-01-03 00:56:48.141872 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:56:48.141875 | orchestrator |
2026-01-03 00:56:48.141879 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-01-03 00:56:48.141882 | orchestrator | Saturday 03 January 2026 00:49:57 +0000 (0:00:01.057) 0:04:23.584 ******
2026-01-03 00:56:48.141885 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:56:48.141888 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:56:48.141891 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:56:48.141894 | orchestrator |
2026-01-03 00:56:48.141897 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-01-03 00:56:48.141900 | orchestrator | Saturday 03 January 2026 00:49:59 +0000 (0:00:02.036) 0:04:25.621 ******
2026-01-03 00:56:48.141903 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:56:48.141906 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:56:48.141910 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:56:48.141913 | orchestrator |
2026-01-03 00:56:48.141919 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-01-03 00:56:48.141922 | orchestrator | Saturday 03 January 2026 00:50:01 +0000 (0:00:01.329) 0:04:26.950 ******
2026-01-03 00:56:48.141925 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:56:48.141928 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:56:48.141931 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:56:48.141934 | orchestrator |
2026-01-03 00:56:48.141937 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-01-03 00:56:48.141940 | orchestrator | Saturday 03 January 2026 00:50:03 +0000 (0:00:01.980) 0:04:28.931 ******
2026-01-03 00:56:48.141943 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:56:48.141947 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:56:48.141950 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:56:48.141954 | orchestrator |
2026-01-03 00:56:48.141963 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-01-03 00:56:48.141968 | orchestrator | Saturday 03 January 2026 00:50:05 +0000 (0:00:02.094) 0:04:31.025 ******
2026-01-03 00:56:48.141973 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:56:48.141978 | orchestrator |
2026-01-03 00:56:48.141983 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-01-03 00:56:48.141989 | orchestrator | Saturday 03 January 2026 00:50:05 +0000 (0:00:00.477) 0:04:31.503 ******
2026-01-03 00:56:48.141994 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-01-03 00:56:48.142000 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.142005 | orchestrator |
2026-01-03 00:56:48.142011 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-01-03 00:56:48.142040 | orchestrator | Saturday 03 January 2026 00:50:27 +0000 (0:00:21.949) 0:04:53.452 ******
2026-01-03 00:56:48.142046 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.142051 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.142054 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.142057 | orchestrator |
2026-01-03 00:56:48.142060 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-01-03 00:56:48.142063 | orchestrator | Saturday 03 January 2026 00:50:37 +0000 (0:00:09.530) 0:05:02.983 ******
2026-01-03 00:56:48.142066 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.142070 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.142073 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.142076 | orchestrator |
2026-01-03 00:56:48.142079 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-01-03 00:56:48.142098 | orchestrator | Saturday 03 January 2026 00:50:37 +0000 (0:00:00.511) 0:05:03.494 ******
2026-01-03 00:56:48.142103 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ea228ef1316d5abc395beacf4845b668c79ce487'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-01-03 00:56:48.142108 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ea228ef1316d5abc395beacf4845b668c79ce487'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-01-03 00:56:48.142112 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ea228ef1316d5abc395beacf4845b668c79ce487'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-01-03 00:56:48.142116 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ea228ef1316d5abc395beacf4845b668c79ce487'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-01-03 00:56:48.142119 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ea228ef1316d5abc395beacf4845b668c79ce487'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-01-03 00:56:48.142126 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ea228ef1316d5abc395beacf4845b668c79ce487'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__ea228ef1316d5abc395beacf4845b668c79ce487'}])
2026-01-03 00:56:48.142133 | orchestrator |
2026-01-03 00:56:48.142136 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-03 00:56:48.142140 | orchestrator | Saturday 03 January 2026 00:50:51 +0000 (0:00:14.343) 0:05:17.837 ******
2026-01-03 00:56:48.142143 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.142146 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.142149 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.142152 | orchestrator |
2026-01-03 00:56:48.142155 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-01-03 00:56:48.142158 | orchestrator | Saturday 03 January 2026 00:50:52 +0000 (0:00:00.313) 0:05:18.150 ******
2026-01-03 00:56:48.142161 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:56:48.142164 | orchestrator |
2026-01-03 00:56:48.142167 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-01-03 00:56:48.142170 | orchestrator | Saturday 03 January 2026 00:50:52 +0000 (0:00:00.602) 0:05:18.753 ******
2026-01-03 00:56:48.142173 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.142176 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.142180 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.142183 | orchestrator |
2026-01-03 00:56:48.142186 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-01-03 00:56:48.142189 | orchestrator | Saturday 03 January 2026 00:50:53 +0000 (0:00:00.328) 0:05:19.081 ******
2026-01-03 00:56:48.142192 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.142195 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.142198 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.142201 | orchestrator |
2026-01-03 00:56:48.142204 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-01-03 00:56:48.142207 | orchestrator | Saturday 03 January 2026 00:50:53 +0000 (0:00:00.288) 0:05:19.370 ******
2026-01-03 00:56:48.142210 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-03 00:56:48.142213 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-03 00:56:48.142216 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-03 00:56:48.142220 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.142223 | orchestrator |
2026-01-03 00:56:48.142226 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-01-03 00:56:48.142229 | orchestrator | Saturday 03 January 2026 00:50:54 +0000 (0:00:00.698) 0:05:20.069 ******
2026-01-03 00:56:48.142232 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.142235 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.142247 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.142251 | orchestrator |
2026-01-03 00:56:48.142254 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-01-03 00:56:48.142257 | orchestrator |
2026-01-03 00:56:48.142260 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-03 00:56:48.142263 | orchestrator | Saturday 03 January 2026 00:50:54 +0000 (0:00:00.637) 0:05:20.706 ******
2026-01-03 00:56:48.142266 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:56:48.142270 | orchestrator |
2026-01-03 00:56:48.142273 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-03 00:56:48.142276 | orchestrator | Saturday 03 January 2026 00:50:55 +0000 (0:00:00.504) 0:05:21.211 ******
2026-01-03 00:56:48.142279 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1, testbed-node-0, testbed-node-2
2026-01-03 00:56:48.142284 | orchestrator |
2026-01-03 00:56:48.142288 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-03 00:56:48.142291 | orchestrator | Saturday 03 January 2026 00:50:56 +0000 (0:00:00.980) 0:05:22.191 ******
2026-01-03 00:56:48.142294 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.142297 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.142300 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.142303 | orchestrator |
2026-01-03 00:56:48.142306 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-03 00:56:48.142309 | orchestrator | Saturday 03 January 2026 00:50:57 +0000 (0:00:00.744) 0:05:22.936 ******
2026-01-03 00:56:48.142312 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.142315 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.142318 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.142321 | orchestrator |
2026-01-03 00:56:48.142324 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-03 00:56:48.142327 | orchestrator | Saturday 03 January 2026 00:50:57 +0000 (0:00:00.268) 0:05:23.205 ******
2026-01-03 00:56:48.142330 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.142333 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.142336 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.142341 | orchestrator |
2026-01-03 00:56:48.142346 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-03 00:56:48.142351 | orchestrator | Saturday 03 January 2026 00:50:57 +0000 (0:00:00.377) 0:05:23.583 ******
2026-01-03 00:56:48.142356 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.142361 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.142366 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.142370 | orchestrator |
2026-01-03 00:56:48.142375 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-03 00:56:48.142380 | orchestrator | Saturday 03 January 2026 00:50:57 +0000 (0:00:00.265) 0:05:23.848 ******
2026-01-03 00:56:48.142385 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.142391 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.142396 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.142402 | orchestrator |
2026-01-03 00:56:48.142407 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-03 00:56:48.142412 | orchestrator | Saturday 03 January 2026 00:50:58 +0000 (0:00:00.752) 0:05:24.601 ******
2026-01-03 00:56:48.142418 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.142424 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.142430 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.142435 | orchestrator |
2026-01-03 00:56:48.142443 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-03 00:56:48.142448 | orchestrator | Saturday 03 January 2026 00:50:58 +0000 (0:00:00.262) 0:05:24.863 ******
2026-01-03 00:56:48.142453 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.142459 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.142464 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.142470 | orchestrator |
2026-01-03 00:56:48.142476 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-03 00:56:48.142482 | orchestrator | Saturday 03 January 2026 00:50:59 +0000 (0:00:00.256) 0:05:25.119 ******
2026-01-03 00:56:48.142488 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.142494 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.142498 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.142501 | orchestrator |
2026-01-03 00:56:48.142504 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-03 00:56:48.142508 | orchestrator | Saturday 03 January 2026 00:51:00 +0000 (0:00:01.052) 0:05:26.172 ******
2026-01-03 00:56:48.142513 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.142518 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.142522 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.142526 | orchestrator |
2026-01-03 00:56:48.142531 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-03 00:56:48.142539 | orchestrator | Saturday 03 January 2026 00:51:00 +0000 (0:00:00.705) 0:05:26.878 ******
2026-01-03 00:56:48.142544 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.142549 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.142553 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.142558 | orchestrator |
2026-01-03 00:56:48.142562 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-03 00:56:48.142567 | orchestrator | Saturday 03 January 2026 00:51:01 +0000 (0:00:00.247) 0:05:27.126 ******
2026-01-03 00:56:48.142572 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.142577 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.142583 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.142588 | orchestrator |
2026-01-03 00:56:48.142593 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-03 00:56:48.142599 | orchestrator | Saturday 03 January 2026 00:51:01 +0000 (0:00:00.299) 0:05:27.425 ******
2026-01-03 00:56:48.142603 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.142606 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.142609 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.142612 | orchestrator |
2026-01-03 00:56:48.142615 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-03 00:56:48.142634 | orchestrator | Saturday 03 January 2026 00:51:01 +0000 (0:00:00.364) 0:05:27.790 ******
2026-01-03 00:56:48.142638 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.142641 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.142644 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.142647 | orchestrator |
2026-01-03 00:56:48.142651 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-03 00:56:48.142657 | orchestrator | Saturday 03 January 2026 00:51:02 +0000 (0:00:00.217) 0:05:28.007 ******
2026-01-03 00:56:48.142662 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.142668 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.142673 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.142677 | orchestrator |
2026-01-03 00:56:48.142682 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-03 00:56:48.142688 | orchestrator | Saturday 03 January 2026 00:51:02 +0000 (0:00:00.243) 0:05:28.251 ******
2026-01-03 00:56:48.142694 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.142698 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.142704 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.142709 | orchestrator |
2026-01-03 00:56:48.142715 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-03 00:56:48.142720 | orchestrator | Saturday 03 January 2026 00:51:02 +0000 (0:00:00.235) 0:05:28.486 ******
2026-01-03 00:56:48.142725 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.142730 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.142736 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.142739 | orchestrator |
2026-01-03 00:56:48.142742 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-03 00:56:48.142745 | orchestrator | Saturday 03 January 2026 00:51:02 +0000 (0:00:00.392) 0:05:28.878 ******
2026-01-03 00:56:48.142748 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.142751 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.142755 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.142759 | orchestrator |
2026-01-03 00:56:48.142764 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-03 00:56:48.142769 | orchestrator | Saturday 03 January 2026 00:51:03 +0000 (0:00:00.286) 0:05:29.165 ******
2026-01-03 00:56:48.142774 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.142779 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.142784 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.142789 | orchestrator |
2026-01-03 00:56:48.142794 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-03 00:56:48.142799 | orchestrator | Saturday 03 January 2026 00:51:03 +0000 (0:00:00.292) 0:05:29.458 ******
2026-01-03 00:56:48.142809 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.142814 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.142849 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.142854 | orchestrator |
2026-01-03 00:56:48.142857 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-01-03 00:56:48.142860 | orchestrator | Saturday 03 January 2026 00:51:04 +0000 (0:00:00.607) 0:05:30.065 ******
2026-01-03 00:56:48.142863 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-03 00:56:48.142867 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-03 00:56:48.142870 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-03 00:56:48.142873 | orchestrator |
2026-01-03 00:56:48.142876 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-01-03 00:56:48.142879 | orchestrator | Saturday 03 January 2026 00:51:04 +0000 (0:00:00.559) 0:05:30.625 ******
2026-01-03 00:56:48.142885 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:56:48.142888 | orchestrator |
2026-01-03 00:56:48.142891 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-01-03 00:56:48.142894 | orchestrator | Saturday 03 January 2026 00:51:05 +0000 (0:00:00.475) 0:05:31.101 ******
2026-01-03 00:56:48.142897 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:56:48.142901 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:56:48.142904 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:56:48.142907 | orchestrator |
2026-01-03 00:56:48.142910 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-01-03 00:56:48.142914 | orchestrator | Saturday 03 January 2026 00:51:05 +0000 (0:00:00.743) 0:05:31.844 ******
2026-01-03 00:56:48.142918 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.142923 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.142928 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.142933 | orchestrator |
2026-01-03 00:56:48.142938 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-01-03 00:56:48.142943 | orchestrator | Saturday 03 January 2026 00:51:06 +0000 (0:00:00.416) 0:05:32.261 ******
2026-01-03 00:56:48.142947 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-03 00:56:48.142953 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-03 00:56:48.142957 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-03 00:56:48.142962 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-01-03 00:56:48.142967 | orchestrator |
2026-01-03 00:56:48.142972 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-01-03 00:56:48.142977 | orchestrator | Saturday 03 January 2026 00:51:16 +0000 (0:00:10.214) 0:05:42.476 ******
2026-01-03 00:56:48.142981 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.142986 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.142991 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.142996 | orchestrator |
2026-01-03 00:56:48.143001 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-01-03 00:56:48.143007 | orchestrator | Saturday 03 January 2026 00:51:16 +0000 (0:00:00.297) 0:05:42.773 ******
2026-01-03 00:56:48.143012 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-01-03 00:56:48.143016 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-03 00:56:48.143019 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-03 00:56:48.143023 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-03 00:56:48.143026 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-01-03 00:56:48.143049 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-03 00:56:48.143053 | orchestrator |
2026-01-03 00:56:48.143056 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-01-03 00:56:48.143059 | orchestrator | Saturday 03 January 2026 00:51:18 +0000 (0:00:02.023) 0:05:44.797 ******
2026-01-03 00:56:48.143068 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-01-03 00:56:48.143071 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-03 00:56:48.143074 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-03 00:56:48.143077 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-03 00:56:48.143080 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-01-03 00:56:48.143083 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-01-03 00:56:48.143086 | orchestrator |
2026-01-03 00:56:48.143089 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-01-03 00:56:48.143092 | orchestrator | Saturday 03 January 2026 00:51:20 +0000 (0:00:01.106) 0:05:45.904 ******
2026-01-03 00:56:48.143095 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.143098 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.143102 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.143105 | orchestrator |
2026-01-03 00:56:48.143108 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-01-03 00:56:48.143111 | orchestrator | Saturday 03 January 2026 00:51:20 +0000 (0:00:00.863) 0:05:46.767 ******
2026-01-03 00:56:48.143114 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.143117 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.143120 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.143123 | orchestrator |
2026-01-03 00:56:48.143126 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-01-03 00:56:48.143129 | orchestrator | Saturday 03 January 2026 00:51:21 +0000 (0:00:00.401) 0:05:47.168 ******
2026-01-03 00:56:48.143132 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.143135 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.143138 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.143141 | orchestrator |
2026-01-03 00:56:48.143145 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-01-03 00:56:48.143148 | orchestrator | Saturday 03 January 2026 00:51:21 +0000 (0:00:00.226) 0:05:47.395 ******
2026-01-03 00:56:48.143151 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:56:48.143154 | orchestrator |
2026-01-03 00:56:48.143157 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-01-03 00:56:48.143160 | orchestrator | Saturday 03 January 2026 00:51:22 +0000 (0:00:00.506) 0:05:47.902 ******
2026-01-03 00:56:48.143163 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.143166 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.143169 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.143172 | orchestrator |
2026-01-03 00:56:48.143175 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-01-03 00:56:48.143179 | orchestrator | Saturday 03 January 2026 00:51:22 +0000 (0:00:00.258) 0:05:48.161 ******
2026-01-03 00:56:48.143182 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.143185 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:56:48.143188 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.143191 | orchestrator |
2026-01-03 00:56:48.143194 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-01-03 00:56:48.143197 | orchestrator | Saturday 03 January 2026 00:51:22 +0000 (0:00:00.471) 0:05:48.633 ******
2026-01-03 00:56:48.143202 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:56:48.143206 | orchestrator |
2026-01-03 00:56:48.143209 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-01-03 00:56:48.143212 | orchestrator | Saturday 03 January 2026 00:51:23 +0000 (0:00:00.587) 0:05:49.220 ******
2026-01-03 00:56:48.143215 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:56:48.143219 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:56:48.143225 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:56:48.143230 | orchestrator |
2026-01-03 00:56:48.143240 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-01-03 00:56:48.143246 | orchestrator | Saturday 03 January 2026 00:51:24 +0000 (0:00:01.138) 0:05:50.359 ******
2026-01-03 00:56:48.143252 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:56:48.143257 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:56:48.143263 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:56:48.143269 | orchestrator |
2026-01-03 00:56:48.143275 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-01-03 00:56:48.143280 | orchestrator | Saturday 03 January 2026 00:51:25 +0000 (0:00:01.191) 0:05:51.550 ******
2026-01-03 00:56:48.143286 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:56:48.143292 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:56:48.143297 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:56:48.143303 | orchestrator |
2026-01-03 00:56:48.143308 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-01-03 00:56:48.143314 | orchestrator | Saturday 03 January 2026 00:51:28 +0000 (0:00:02.350) 0:05:53.901 ******
2026-01-03 00:56:48.143320 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:56:48.143326 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:56:48.143331 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:56:48.143336 | orchestrator |
2026-01-03 00:56:48.143341 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-01-03 00:56:48.143347 | orchestrator | Saturday 03 January 2026 00:51:29 +0000 (0:00:01.967) 0:05:55.868 ******
2026-01-03 00:56:48.143352 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.143358 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:56:48.143362 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-01-03 00:56:48.143368 | orchestrator |
2026-01-03 00:56:48.143373 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-01-03 00:56:48.143378 | orchestrator | Saturday 03 January 2026 00:51:30 +0000 (0:00:00.546) 0:05:56.415 ******
2026-01-03 00:56:48.143403 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-01-03 00:56:48.143410 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-01-03 00:56:48.143415 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-01-03 00:56:48.143420 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-01-03 00:56:48.143426 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2026-01-03 00:56:48.143431 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left).
2026-01-03 00:56:48.143436 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-01-03 00:56:48.143441 | orchestrator |
2026-01-03 00:56:48.143447 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-01-03 00:56:48.143452 | orchestrator | Saturday 03 January 2026 00:52:06 +0000 (0:00:36.020) 0:06:32.435 ******
2026-01-03 00:56:48.143457 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-01-03 00:56:48.143462 | orchestrator |
2026-01-03 00:56:48.143467 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-01-03 00:56:48.143472 | orchestrator | Saturday 03 January 2026 00:52:07 +0000 (0:00:01.150) 0:06:33.586 ******
2026-01-03 00:56:48.143477 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.143482 | orchestrator |
2026-01-03 00:56:48.143486 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-01-03 00:56:48.143492 | orchestrator | Saturday 03 January 2026 00:52:07 +0000 (0:00:00.261) 0:06:33.847 ******
2026-01-03 00:56:48.143497 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.143502 | orchestrator |
2026-01-03 00:56:48.143507 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-01-03 00:56:48.143518 | orchestrator | Saturday 03 January 2026 00:52:08 +0000 (0:00:00.131) 0:06:33.979 ******
2026-01-03 00:56:48.143523 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-01-03 00:56:48.143528 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-01-03 00:56:48.143533 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-01-03 00:56:48.143537 | orchestrator |
2026-01-03 00:56:48.143543 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-01-03 00:56:48.143547 | orchestrator | Saturday 03 January 2026 00:52:14 +0000 (0:00:06.747) 0:06:40.726 ******
2026-01-03 00:56:48.143552 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-01-03 00:56:48.143557 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-01-03 00:56:48.143562 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-01-03 00:56:48.143566 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-01-03 00:56:48.143571 | orchestrator |
2026-01-03 00:56:48.143576 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-03 00:56:48.143581 | orchestrator | Saturday 03 January 2026 00:52:19 +0000 (0:00:05.045) 0:06:45.772 ******
2026-01-03 00:56:48.143589 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:56:48.143594 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:56:48.143599 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:56:48.143604 | orchestrator |
2026-01-03 00:56:48.143609 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-01-03 00:56:48.143613 | orchestrator | Saturday 03 January 2026 00:52:20 +0000 (0:00:00.688) 0:06:46.461 ******
2026-01-03 00:56:48.143618 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:56:48.143622 | orchestrator |
2026-01-03 00:56:48.143627 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-01-03 00:56:48.143631 | orchestrator | Saturday 03 January 2026 00:52:21 +0000 (0:00:00.510) 0:06:46.971 ******
2026-01-03 00:56:48.143636 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.143641 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.143646 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.143651 | orchestrator |
2026-01-03 00:56:48.143656 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-01-03 00:56:48.143661 | orchestrator | Saturday 03 January 2026 00:52:21 +0000 (0:00:00.446) 0:06:47.418 ******
2026-01-03 00:56:48.143667 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:56:48.143671 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:56:48.143677 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:56:48.143682 | orchestrator |
2026-01-03 00:56:48.143686 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-01-03 00:56:48.143691 | orchestrator | Saturday 03 January 2026 00:52:22 +0000 (0:00:01.082) 0:06:48.500 ******
2026-01-03 00:56:48.143696 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-03 00:56:48.143701 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-03 00:56:48.143705 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-03 00:56:48.143711 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:56:48.143715 | orchestrator |
2026-01-03 00:56:48.143720 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-01-03 00:56:48.143725 | orchestrator | Saturday 03 January 2026 00:52:23 +0000 (0:00:00.511) 0:06:49.012 ******
2026-01-03 00:56:48.143730 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:56:48.143735 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:56:48.143740 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:56:48.143745 | orchestrator |
2026-01-03 00:56:48.143751 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-01-03 00:56:48.143756 | orchestrator |
2026-01-03 00:56:48.143761 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-03 00:56:48.143798 | orchestrator | Saturday 03 January 2026 00:52:23 +0000 (0:00:00.586) 0:06:49.598 ******
2026-01-03 00:56:48.143804 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:56:48.143810 | orchestrator |
2026-01-03 00:56:48.143814 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-03 00:56:48.143831 | orchestrator | Saturday 03 January 2026 00:52:24 +0000 (0:00:00.375) 0:06:49.974 ******
2026-01-03 00:56:48.143836 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:56:48.143841 | orchestrator |
2026-01-03 00:56:48.143845 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-03 00:56:48.143850 | orchestrator | Saturday 03 January 2026 00:52:24 +0000 (0:00:00.488) 0:06:50.462 ******
2026-01-03 00:56:48.143854 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.143858 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.143863 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.143867 | orchestrator |
2026-01-03 00:56:48.143871 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-03 00:56:48.143876 | orchestrator | Saturday 03 January 2026 00:52:24 +0000 (0:00:00.245) 0:06:50.707 ******
2026-01-03 00:56:48.143880 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:56:48.143885 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:56:48.143889 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:56:48.143892 | orchestrator |
2026-01-03 00:56:48.143895 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-03 00:56:48.143899 | orchestrator | Saturday 03 January 2026 00:52:25 +0000 (0:00:00.699) 0:06:51.407 ******
2026-01-03 00:56:48.143902 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:56:48.143905 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:56:48.143908 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:56:48.143911 | orchestrator |
2026-01-03 00:56:48.143914 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-03 00:56:48.143917 | orchestrator | Saturday 03 January 2026 00:52:26 +0000 (0:00:00.697) 0:06:52.104 ******
2026-01-03 00:56:48.143920 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:56:48.143923 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:56:48.143926 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:56:48.143929 | orchestrator |
2026-01-03 00:56:48.143932 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-03 00:56:48.143935 | orchestrator | Saturday 03 January 2026 00:52:27 +0000 (0:00:00.831) 0:06:52.936 ******
2026-01-03 00:56:48.143938 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.143941 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.143944 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.143947 | orchestrator |
2026-01-03 00:56:48.143950 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-03 00:56:48.143954 | orchestrator | Saturday 03 January 2026 00:52:27 +0000 (0:00:00.296) 0:06:53.233 ******
2026-01-03 00:56:48.143957 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.143960 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.143963 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.143966 | orchestrator |
2026-01-03 00:56:48.143969 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-03 00:56:48.143972 | orchestrator | Saturday 03 January 2026 00:52:27 +0000 (0:00:00.281) 0:06:53.514 ******
2026-01-03 00:56:48.143975 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.143981 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.143984 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.143987 | orchestrator |
2026-01-03 00:56:48.143991 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-03 00:56:48.143994 | orchestrator | Saturday 03 January 2026 00:52:27 +0000 (0:00:00.284) 0:06:53.799 ******
2026-01-03 00:56:48.144000 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:56:48.144003 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:56:48.144006 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:56:48.144009 | orchestrator |
2026-01-03 00:56:48.144012 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-03 00:56:48.144015 | orchestrator | Saturday 03 January 2026 00:52:28 +0000 (0:00:00.802) 0:06:54.602 ******
2026-01-03 00:56:48.144018 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:56:48.144021 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:56:48.144026 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:56:48.144031 | orchestrator |
2026-01-03 00:56:48.144036 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-03 00:56:48.144041 | orchestrator | Saturday 03 January 2026 00:52:29 +0000 (0:00:00.630) 0:06:55.233 ******
2026-01-03 00:56:48.144046 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.144051 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.144056 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.144061 | orchestrator |
2026-01-03 00:56:48.144066 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-03 00:56:48.144071 | orchestrator | Saturday 03 January 2026 00:52:29 +0000 (0:00:00.278) 0:06:55.512 ******
2026-01-03 00:56:48.144076 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.144079 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.144082 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.144085 | orchestrator |
2026-01-03 00:56:48.144088 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-03 00:56:48.144091 | orchestrator | Saturday 03 January 2026 00:52:29 +0000 (0:00:00.270) 0:06:55.782 ******
2026-01-03 00:56:48.144094 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:56:48.144097 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:56:48.144100 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:56:48.144103 | orchestrator |
2026-01-03 00:56:48.144106 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-03 00:56:48.144110 | orchestrator | Saturday 03 January 2026 00:52:30 +0000 (0:00:00.497) 0:06:56.279 ******
2026-01-03 00:56:48.144113 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:56:48.144116 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:56:48.144119 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:56:48.144122 | orchestrator |
2026-01-03 00:56:48.144125 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-03 00:56:48.144131 | orchestrator | Saturday 03 January 2026 00:52:30 +0000 (0:00:00.334) 0:06:56.614 ******
2026-01-03 00:56:48.144134 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:56:48.144137 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:56:48.144140 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:56:48.144144 | orchestrator |
2026-01-03 00:56:48.144147 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-03 00:56:48.144150 | orchestrator | Saturday 03 January 2026 00:52:31 +0000 (0:00:00.322) 0:06:56.937 ******
2026-01-03 00:56:48.144153 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.144156 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.144159 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.144162 | orchestrator |
2026-01-03 00:56:48.144165 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-03 00:56:48.144168 | orchestrator | Saturday 03 January 2026 00:52:31 +0000 (0:00:00.311) 0:06:57.248 ******
2026-01-03 00:56:48.144171 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.144175 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.144178 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.144181 | orchestrator |
2026-01-03 00:56:48.144184 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-03 00:56:48.144187 | orchestrator | Saturday 03 January 2026 00:52:31 +0000 (0:00:00.572) 0:06:57.821 ******
2026-01-03 00:56:48.144190 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.144193 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.144199 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.144202 | orchestrator |
2026-01-03 00:56:48.144205 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-03 00:56:48.144208 | orchestrator | Saturday 03 January 2026 00:52:32 +0000 (0:00:00.299) 0:06:58.120 ******
2026-01-03 00:56:48.144211 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:56:48.144214 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:56:48.144217 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:56:48.144220 | orchestrator |
2026-01-03 00:56:48.144223 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-03 00:56:48.144227 | orchestrator | Saturday 03 January 2026 00:52:32 +0000 (0:00:00.323) 0:06:58.444 ******
2026-01-03 00:56:48.144230 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:56:48.144233 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:56:48.144236 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:56:48.144239 | orchestrator |
2026-01-03 00:56:48.144242 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-01-03 00:56:48.144245 | orchestrator | Saturday 03 January 2026 00:52:33 +0000 (0:00:00.562) 0:06:59.006 ******
2026-01-03 00:56:48.144248 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:56:48.144251 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:56:48.144254 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:56:48.144257 | orchestrator |
2026-01-03 00:56:48.144260 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-01-03 00:56:48.144263 | orchestrator | Saturday 03 January 2026 00:52:33 +0000 (0:00:00.524) 0:06:59.531 ******
2026-01-03 00:56:48.144267 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-03 00:56:48.144270 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-03 00:56:48.144273 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-03 00:56:48.144276 | orchestrator |
2026-01-03 00:56:48.144279 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-01-03 00:56:48.144282 | orchestrator | Saturday 03 January 2026 00:52:34 +0000 (0:00:00.553) 0:07:00.084 ******
2026-01-03 00:56:48.144287 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:56:48.144291 | orchestrator |
2026-01-03 00:56:48.144294 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-01-03 00:56:48.144297 | orchestrator | Saturday 03 January 2026 00:52:34 +0000 (0:00:00.455) 0:07:00.540 ******
2026-01-03 00:56:48.144300 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.144303 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.144306 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.144309 | orchestrator |
2026-01-03 00:56:48.144312 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-01-03 00:56:48.144315 | orchestrator | Saturday 03 January 2026 00:52:35 +0000 (0:00:00.390) 0:07:00.931 ******
2026-01-03 00:56:48.144319 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.144322 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.144325 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.144328 | orchestrator |
2026-01-03 00:56:48.144331 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-01-03 00:56:48.144334 | orchestrator | Saturday 03 January 2026 00:52:35 +0000 (0:00:00.289) 0:07:01.220 ******
2026-01-03 00:56:48.144337 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:56:48.144340 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:56:48.144343 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:56:48.144346 | orchestrator |
2026-01-03 00:56:48.144349 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-01-03 00:56:48.144352 | orchestrator | Saturday 03 January 2026 00:52:35 +0000 (0:00:00.561) 0:07:01.782 ******
2026-01-03 00:56:48.144355 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:56:48.144359 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:56:48.144364 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:56:48.144367 | orchestrator |
2026-01-03 00:56:48.144370 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-01-03 00:56:48.144373 | orchestrator | Saturday 03 January 2026 00:52:36 +0000 (0:00:00.355) 0:07:02.137 ******
2026-01-03 00:56:48.144376 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-03 00:56:48.144379 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-03 00:56:48.144382 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-03 00:56:48.144390 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-03 00:56:48.144444 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-03 00:56:48.144447 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-03 00:56:48.144451 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-03 00:56:48.144454 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-03 00:56:48.144457 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-03 00:56:48.144460 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-03 00:56:48.144463 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-03 00:56:48.144466 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-03 00:56:48.144469 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-03 00:56:48.144472 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-03 00:56:48.144475 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-03 00:56:48.144478 | orchestrator |
2026-01-03 00:56:48.144481 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-01-03 00:56:48.144484 | orchestrator | Saturday 03 January 2026 00:52:39 +0000 (0:00:03.283) 0:07:05.421 ******
2026-01-03 00:56:48.144487 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.144491 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.144494 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.144497 | orchestrator |
2026-01-03 00:56:48.144500 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-01-03 00:56:48.144503 | orchestrator | Saturday 03 January 2026 00:52:39 +0000 (0:00:00.257) 0:07:05.679 ******
2026-01-03 00:56:48.144506 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:56:48.144509 | orchestrator |
2026-01-03 00:56:48.144512 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-01-03 00:56:48.144515 | orchestrator | Saturday 03 January 2026 00:52:40 +0000 (0:00:00.463) 0:07:06.143 ******
2026-01-03 00:56:48.144518 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-03 00:56:48.144522 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-03 00:56:48.144525 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-03 00:56:48.144528 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-01-03 00:56:48.144531 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-01-03 00:56:48.144534 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-01-03 00:56:48.144537 | orchestrator |
2026-01-03 00:56:48.144540 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-01-03 00:56:48.144543 | orchestrator | Saturday 03 January 2026 00:52:41 +0000 (0:00:01.059) 0:07:07.202 ******
2026-01-03 00:56:48.144548 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-03 00:56:48.144554 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-03 00:56:48.144557 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-03 00:56:48.144561 | orchestrator |
2026-01-03 00:56:48.144564 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-01-03 00:56:48.144567 | orchestrator | Saturday 03 January 2026 00:52:43 +0000 (0:00:01.956) 0:07:09.158 ******
2026-01-03 00:56:48.144570 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-03 00:56:48.144573 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-03 00:56:48.144576 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:56:48.144579 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-03 00:56:48.144582 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-01-03 00:56:48.144585 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:56:48.144588 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-03 00:56:48.144591 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-01-03 00:56:48.144595 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:56:48.144598 | orchestrator |
2026-01-03 00:56:48.144601 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-01-03 00:56:48.144604 | orchestrator | Saturday 03 January 2026 00:52:44 +0000 (0:00:01.368) 0:07:10.527 ******
2026-01-03 00:56:48.144607 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-03 00:56:48.144610 | orchestrator |
2026-01-03 00:56:48.144613 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-01-03 00:56:48.144616 | orchestrator | Saturday 03 January 2026 00:52:46 +0000 (0:00:02.283) 0:07:12.811 ******
2026-01-03 00:56:48.144619 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:56:48.144622 | orchestrator |
2026-01-03 00:56:48.144625 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-01-03 00:56:48.144628 | orchestrator | Saturday 03 January 2026 00:52:47 +0000 (0:00:00.496) 0:07:13.308 ******
2026-01-03 00:56:48.144631 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c38584cd-f033-5ed2-9691-83456ad614b7', 'data_vg': 'ceph-c38584cd-f033-5ed2-9691-83456ad614b7'})
2026-01-03 00:56:48.144635 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c0772612-0fc2-543a-b7cc-c9fc1cdd665f', 'data_vg': 'ceph-c0772612-0fc2-543a-b7cc-c9fc1cdd665f'})
2026-01-03 00:56:48.144642 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-85e74b82-cd6e-500e-9461-b867f1cfbb6a', 'data_vg': 'ceph-85e74b82-cd6e-500e-9461-b867f1cfbb6a'})
2026-01-03 00:56:48.144645 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898', 'data_vg': 'ceph-d5e4cbc2-7f45-5eff-bf2d-d06fd7ec5898'})
2026-01-03 00:56:48.144648 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-45670551-be8c-5463-bb13-3841732d7282', 'data_vg': 'ceph-45670551-be8c-5463-bb13-3841732d7282'})
2026-01-03 00:56:48.144651 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1ae59360-fa3d-59bd-b3b8-51590acdfd6e', 'data_vg': 'ceph-1ae59360-fa3d-59bd-b3b8-51590acdfd6e'})
2026-01-03 00:56:48.144654 | orchestrator |
2026-01-03 00:56:48.144657 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-01-03 00:56:48.144660 | orchestrator | Saturday 03 January 2026 00:53:28 +0000 (0:00:41.398) 0:07:54.706 ******
2026-01-03 00:56:48.144663 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:56:48.144667 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:56:48.144670 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:56:48.144673 | orchestrator |
2026-01-03 00:56:48.144676 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-01-03 00:56:48.144679 | orchestrator | Saturday 03 January 2026 00:53:29 +0000 (0:00:00.294) 0:07:55.001 ******
2026-01-03 00:56:48.144682 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:56:48.144687 | orchestrator |
2026-01-03 00:56:48.144690 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-01-03 00:56:48.144693 | orchestrator | Saturday 03 January 2026 00:53:29 +0000 (0:00:00.470) 0:07:55.471 ******
2026-01-03 00:56:48.144696 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:56:48.144699 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:56:48.144702 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:56:48.144705 | orchestrator |
2026-01-03 00:56:48.144708 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-01-03 00:56:48.144711 | orchestrator | Saturday 03 January 2026 00:53:30 +0000 (0:00:00.896) 0:07:56.368 ******
2026-01-03 00:56:48.144714 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:56:48.144717 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:56:48.144720 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:56:48.144723 | orchestrator |
2026-01-03 00:56:48.144726 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-01-03 00:56:48.144729 | orchestrator | Saturday 03 January 2026 00:53:33 +0000 (0:00:02.622) 0:07:58.991 ******
2026-01-03 00:56:48.144732 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:56:48.144735 | orchestrator |
2026-01-03 00:56:48.144739 | orchestrator | TASK [ceph-osd :
Generate systemd unit file] *********************************** 2026-01-03 00:56:48.144742 | orchestrator | Saturday 03 January 2026 00:53:33 +0000 (0:00:00.490) 0:07:59.481 ****** 2026-01-03 00:56:48.144745 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:56:48.144748 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:56:48.144751 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:56:48.144754 | orchestrator | 2026-01-03 00:56:48.144759 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-01-03 00:56:48.144762 | orchestrator | Saturday 03 January 2026 00:53:35 +0000 (0:00:01.444) 0:08:00.926 ****** 2026-01-03 00:56:48.144766 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:56:48.144769 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:56:48.144772 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:56:48.144775 | orchestrator | 2026-01-03 00:56:48.144778 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-01-03 00:56:48.144781 | orchestrator | Saturday 03 January 2026 00:53:36 +0000 (0:00:01.137) 0:08:02.063 ****** 2026-01-03 00:56:48.144784 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:56:48.144787 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:56:48.144790 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:56:48.144793 | orchestrator | 2026-01-03 00:56:48.144796 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-01-03 00:56:48.144799 | orchestrator | Saturday 03 January 2026 00:53:37 +0000 (0:00:01.805) 0:08:03.869 ****** 2026-01-03 00:56:48.144802 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.144805 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.144808 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.144811 | orchestrator | 2026-01-03 00:56:48.144814 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-01-03 00:56:48.144817 | orchestrator | Saturday 03 January 2026 00:53:38 +0000 (0:00:00.344) 0:08:04.213 ****** 2026-01-03 00:56:48.144832 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.144837 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.144841 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.144847 | orchestrator | 2026-01-03 00:56:48.144852 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-01-03 00:56:48.144857 | orchestrator | Saturday 03 January 2026 00:53:38 +0000 (0:00:00.526) 0:08:04.739 ****** 2026-01-03 00:56:48.144862 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-03 00:56:48.144867 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-01-03 00:56:48.144871 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-01-03 00:56:48.144878 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-01-03 00:56:48.144881 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-01-03 00:56:48.144884 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-01-03 00:56:48.144887 | orchestrator | 2026-01-03 00:56:48.144890 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-01-03 00:56:48.144893 | orchestrator | Saturday 03 January 2026 00:53:39 +0000 (0:00:00.974) 0:08:05.714 ****** 2026-01-03 00:56:48.144896 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-01-03 00:56:48.144899 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-01-03 00:56:48.144905 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-01-03 00:56:48.144908 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-01-03 00:56:48.144911 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-01-03 00:56:48.144914 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-01-03 00:56:48.144917 | orchestrator | 2026-01-03 00:56:48.144920 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-01-03 00:56:48.144923 | orchestrator | Saturday 03 January 2026 00:53:42 +0000 (0:00:02.238) 0:08:07.952 ****** 2026-01-03 00:56:48.144926 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-01-03 00:56:48.144929 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-01-03 00:56:48.144932 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-01-03 00:56:48.144936 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-01-03 00:56:48.144939 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-01-03 00:56:48.144942 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-01-03 00:56:48.144945 | orchestrator | 2026-01-03 00:56:48.144948 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-01-03 00:56:48.144951 | orchestrator | Saturday 03 January 2026 00:53:46 +0000 (0:00:04.081) 0:08:12.034 ****** 2026-01-03 00:56:48.144954 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.144957 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.144960 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-03 00:56:48.144963 | orchestrator | 2026-01-03 00:56:48.144966 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-01-03 00:56:48.144969 | orchestrator | Saturday 03 January 2026 00:53:49 +0000 (0:00:03.580) 0:08:15.615 ****** 2026-01-03 00:56:48.144972 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.144975 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.144978 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-01-03 00:56:48.144982 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-03 00:56:48.144985 | orchestrator | 2026-01-03 00:56:48.144988 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-01-03 00:56:48.144991 | orchestrator | Saturday 03 January 2026 00:54:02 +0000 (0:00:12.456) 0:08:28.071 ****** 2026-01-03 00:56:48.144994 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.145014 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.145018 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.145021 | orchestrator | 2026-01-03 00:56:48.145026 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-03 00:56:48.145032 | orchestrator | Saturday 03 January 2026 00:54:03 +0000 (0:00:00.978) 0:08:29.050 ****** 2026-01-03 00:56:48.145037 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.145042 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.145047 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.145053 | orchestrator | 2026-01-03 00:56:48.145059 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-01-03 00:56:48.145064 | orchestrator | Saturday 03 January 2026 00:54:03 +0000 (0:00:00.351) 0:08:29.401 ****** 2026-01-03 00:56:48.145069 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:56:48.145072 | orchestrator | 2026-01-03 00:56:48.145075 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-01-03 00:56:48.145083 | orchestrator | Saturday 03 January 2026 00:54:04 +0000 (0:00:00.522) 0:08:29.923 ****** 2026-01-03 00:56:48.145086 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-03 00:56:48.145089 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-01-03 00:56:48.145092 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-03 00:56:48.145096 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.145099 | orchestrator | 2026-01-03 00:56:48.145102 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-01-03 00:56:48.145105 | orchestrator | Saturday 03 January 2026 00:54:04 +0000 (0:00:00.794) 0:08:30.718 ****** 2026-01-03 00:56:48.145108 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.145111 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.145114 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.145117 | orchestrator | 2026-01-03 00:56:48.145120 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-01-03 00:56:48.145123 | orchestrator | Saturday 03 January 2026 00:54:05 +0000 (0:00:00.300) 0:08:31.018 ****** 2026-01-03 00:56:48.145126 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.145129 | orchestrator | 2026-01-03 00:56:48.145132 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-01-03 00:56:48.145135 | orchestrator | Saturday 03 January 2026 00:54:05 +0000 (0:00:00.203) 0:08:31.221 ****** 2026-01-03 00:56:48.145138 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.145142 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.145145 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.145148 | orchestrator | 2026-01-03 00:56:48.145151 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-01-03 00:56:48.145154 | orchestrator | Saturday 03 January 2026 00:54:05 +0000 (0:00:00.320) 0:08:31.542 ****** 2026-01-03 00:56:48.145157 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.145160 | orchestrator | 2026-01-03 00:56:48.145163 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2026-01-03 00:56:48.145166 | orchestrator | Saturday 03 January 2026 00:54:05 +0000 (0:00:00.229) 0:08:31.771 ****** 2026-01-03 00:56:48.145169 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.145172 | orchestrator | 2026-01-03 00:56:48.145175 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-01-03 00:56:48.145178 | orchestrator | Saturday 03 January 2026 00:54:06 +0000 (0:00:00.222) 0:08:31.994 ****** 2026-01-03 00:56:48.145181 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.145184 | orchestrator | 2026-01-03 00:56:48.145187 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-01-03 00:56:48.145191 | orchestrator | Saturday 03 January 2026 00:54:06 +0000 (0:00:00.124) 0:08:32.119 ****** 2026-01-03 00:56:48.145197 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.145200 | orchestrator | 2026-01-03 00:56:48.145203 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-01-03 00:56:48.145206 | orchestrator | Saturday 03 January 2026 00:54:06 +0000 (0:00:00.213) 0:08:32.333 ****** 2026-01-03 00:56:48.145209 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.145212 | orchestrator | 2026-01-03 00:56:48.145215 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-01-03 00:56:48.145218 | orchestrator | Saturday 03 January 2026 00:54:07 +0000 (0:00:00.662) 0:08:32.995 ****** 2026-01-03 00:56:48.145221 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-03 00:56:48.145224 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-03 00:56:48.145228 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-03 00:56:48.145231 | orchestrator | skipping: [testbed-node-3] 2026-01-03 
00:56:48.145234 | orchestrator | 2026-01-03 00:56:48.145237 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-01-03 00:56:48.145244 | orchestrator | Saturday 03 January 2026 00:54:07 +0000 (0:00:00.417) 0:08:33.413 ****** 2026-01-03 00:56:48.145247 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.145250 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.145253 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.145256 | orchestrator | 2026-01-03 00:56:48.145259 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-01-03 00:56:48.145262 | orchestrator | Saturday 03 January 2026 00:54:07 +0000 (0:00:00.315) 0:08:33.729 ****** 2026-01-03 00:56:48.145265 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.145268 | orchestrator | 2026-01-03 00:56:48.145271 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-01-03 00:56:48.145275 | orchestrator | Saturday 03 January 2026 00:54:08 +0000 (0:00:00.242) 0:08:33.972 ****** 2026-01-03 00:56:48.145278 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.145281 | orchestrator | 2026-01-03 00:56:48.145284 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-01-03 00:56:48.145287 | orchestrator | 2026-01-03 00:56:48.145290 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-03 00:56:48.145293 | orchestrator | Saturday 03 January 2026 00:54:08 +0000 (0:00:00.634) 0:08:34.606 ****** 2026-01-03 00:56:48.145296 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:56:48.145301 | orchestrator | 2026-01-03 00:56:48.145304 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-01-03 00:56:48.145307 | orchestrator | Saturday 03 January 2026 00:54:09 +0000 (0:00:01.188) 0:08:35.795 ****** 2026-01-03 00:56:48.145310 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:56:48.145314 | orchestrator | 2026-01-03 00:56:48.145317 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-03 00:56:48.145320 | orchestrator | Saturday 03 January 2026 00:54:11 +0000 (0:00:01.228) 0:08:37.023 ****** 2026-01-03 00:56:48.145323 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.145326 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.145331 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.145334 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.145337 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.145340 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.145343 | orchestrator | 2026-01-03 00:56:48.145346 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-03 00:56:48.145349 | orchestrator | Saturday 03 January 2026 00:54:12 +0000 (0:00:01.262) 0:08:38.286 ****** 2026-01-03 00:56:48.145352 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.145355 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.145358 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.145361 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.145365 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.145368 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.145371 | orchestrator | 2026-01-03 00:56:48.145374 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-03 00:56:48.145377 | orchestrator | Saturday 03 
January 2026 00:54:13 +0000 (0:00:00.714) 0:08:39.001 ****** 2026-01-03 00:56:48.145380 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.145383 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.145386 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.145389 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.145392 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.145395 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.145398 | orchestrator | 2026-01-03 00:56:48.145401 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-03 00:56:48.145404 | orchestrator | Saturday 03 January 2026 00:54:14 +0000 (0:00:01.054) 0:08:40.055 ****** 2026-01-03 00:56:48.145409 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.145413 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.145416 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.145419 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.145422 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.145425 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.145428 | orchestrator | 2026-01-03 00:56:48.145431 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-03 00:56:48.145434 | orchestrator | Saturday 03 January 2026 00:54:14 +0000 (0:00:00.748) 0:08:40.804 ****** 2026-01-03 00:56:48.145437 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.145440 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.145443 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.145446 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.145449 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.145452 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.145455 | orchestrator | 2026-01-03 00:56:48.145458 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2026-01-03 00:56:48.145464 | orchestrator | Saturday 03 January 2026 00:54:16 +0000 (0:00:01.223) 0:08:42.028 ****** 2026-01-03 00:56:48.145467 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.145470 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.145473 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.145476 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.145479 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.145483 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.145486 | orchestrator | 2026-01-03 00:56:48.145489 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-03 00:56:48.145492 | orchestrator | Saturday 03 January 2026 00:54:16 +0000 (0:00:00.570) 0:08:42.598 ****** 2026-01-03 00:56:48.145495 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.145498 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.145501 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.145504 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.145507 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.145510 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.145513 | orchestrator | 2026-01-03 00:56:48.145516 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-03 00:56:48.145520 | orchestrator | Saturday 03 January 2026 00:54:17 +0000 (0:00:00.879) 0:08:43.477 ****** 2026-01-03 00:56:48.145523 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.145526 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.145529 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.145532 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.145535 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.145538 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.145541 | orchestrator 
| 2026-01-03 00:56:48.145544 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-03 00:56:48.145547 | orchestrator | Saturday 03 January 2026 00:54:18 +0000 (0:00:01.042) 0:08:44.519 ****** 2026-01-03 00:56:48.145550 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.145553 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.145556 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.145559 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.145562 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.145565 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.145569 | orchestrator | 2026-01-03 00:56:48.145572 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-03 00:56:48.145575 | orchestrator | Saturday 03 January 2026 00:54:19 +0000 (0:00:01.308) 0:08:45.828 ****** 2026-01-03 00:56:48.145578 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.145581 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.145584 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.145587 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.145592 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.145596 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.145599 | orchestrator | 2026-01-03 00:56:48.145602 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-03 00:56:48.145605 | orchestrator | Saturday 03 January 2026 00:54:20 +0000 (0:00:00.597) 0:08:46.426 ****** 2026-01-03 00:56:48.145608 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.145611 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.145614 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.145617 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.145620 | orchestrator | ok: [testbed-node-1] 2026-01-03 
00:56:48.145623 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.145626 | orchestrator | 2026-01-03 00:56:48.145629 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-03 00:56:48.145633 | orchestrator | Saturday 03 January 2026 00:54:21 +0000 (0:00:00.899) 0:08:47.326 ****** 2026-01-03 00:56:48.145636 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.145639 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.145643 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.145647 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.145650 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.145653 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.145656 | orchestrator | 2026-01-03 00:56:48.145659 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-03 00:56:48.145662 | orchestrator | Saturday 03 January 2026 00:54:22 +0000 (0:00:00.649) 0:08:47.975 ****** 2026-01-03 00:56:48.145665 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.145668 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.145671 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.145674 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.145677 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.145680 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.145683 | orchestrator | 2026-01-03 00:56:48.145687 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-03 00:56:48.145690 | orchestrator | Saturday 03 January 2026 00:54:22 +0000 (0:00:00.823) 0:08:48.798 ****** 2026-01-03 00:56:48.145693 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.145696 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.145699 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.145702 | orchestrator | skipping: [testbed-node-0] 
2026-01-03 00:56:48.145705 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.145708 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.145711 | orchestrator | 2026-01-03 00:56:48.145714 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-03 00:56:48.145717 | orchestrator | Saturday 03 January 2026 00:54:23 +0000 (0:00:00.624) 0:08:49.423 ****** 2026-01-03 00:56:48.145720 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.145723 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.145727 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.145730 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.145733 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.145736 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.145739 | orchestrator | 2026-01-03 00:56:48.145742 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-03 00:56:48.145745 | orchestrator | Saturday 03 January 2026 00:54:24 +0000 (0:00:00.774) 0:08:50.197 ****** 2026-01-03 00:56:48.145748 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.145751 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.145754 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.145757 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:56:48.145760 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:56:48.145763 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:56:48.145766 | orchestrator | 2026-01-03 00:56:48.145770 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-03 00:56:48.145776 | orchestrator | Saturday 03 January 2026 00:54:24 +0000 (0:00:00.562) 0:08:50.760 ****** 2026-01-03 00:56:48.145780 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.145783 | orchestrator | skipping: [testbed-node-4] 
2026-01-03 00:56:48.145786 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.145789 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.145792 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.145795 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.145798 | orchestrator | 2026-01-03 00:56:48.145801 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-03 00:56:48.145804 | orchestrator | Saturday 03 January 2026 00:54:25 +0000 (0:00:00.955) 0:08:51.715 ****** 2026-01-03 00:56:48.145807 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.145810 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.145813 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.145816 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.145841 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.145844 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.145847 | orchestrator | 2026-01-03 00:56:48.145850 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-03 00:56:48.145853 | orchestrator | Saturday 03 January 2026 00:54:26 +0000 (0:00:00.710) 0:08:52.426 ****** 2026-01-03 00:56:48.145857 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.145860 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.145863 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.145866 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.145869 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.145872 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.145875 | orchestrator | 2026-01-03 00:56:48.145878 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-01-03 00:56:48.145881 | orchestrator | Saturday 03 January 2026 00:54:27 +0000 (0:00:01.344) 0:08:53.770 ****** 2026-01-03 00:56:48.145884 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-01-03 00:56:48.145887 | orchestrator | 2026-01-03 00:56:48.145890 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-01-03 00:56:48.145893 | orchestrator | Saturday 03 January 2026 00:54:31 +0000 (0:00:04.119) 0:08:57.890 ****** 2026-01-03 00:56:48.145896 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-03 00:56:48.145899 | orchestrator | 2026-01-03 00:56:48.145902 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-01-03 00:56:48.145905 | orchestrator | Saturday 03 January 2026 00:54:34 +0000 (0:00:02.328) 0:09:00.219 ****** 2026-01-03 00:56:48.145908 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:56:48.145911 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:56:48.145914 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.145917 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:56:48.145920 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:56:48.145923 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:56:48.145927 | orchestrator | 2026-01-03 00:56:48.145930 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-01-03 00:56:48.145933 | orchestrator | Saturday 03 January 2026 00:54:36 +0000 (0:00:01.880) 0:09:02.100 ****** 2026-01-03 00:56:48.145936 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:56:48.145939 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:56:48.145942 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:56:48.145945 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:56:48.145948 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:56:48.145951 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:56:48.145954 | orchestrator | 2026-01-03 00:56:48.145957 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-01-03 00:56:48.145962 | orchestrator | Saturday 03 January 2026 00:54:37 +0000 (0:00:00.984) 0:09:03.084 ****** 2026-01-03 00:56:48.145965 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:56:48.145971 | orchestrator | 2026-01-03 00:56:48.145974 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-01-03 00:56:48.145978 | orchestrator | Saturday 03 January 2026 00:54:38 +0000 (0:00:01.102) 0:09:04.187 ****** 2026-01-03 00:56:48.145981 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:56:48.145984 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:56:48.145987 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:56:48.145990 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:56:48.145993 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:56:48.145996 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:56:48.145999 | orchestrator | 2026-01-03 00:56:48.146002 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-01-03 00:56:48.146005 | orchestrator | Saturday 03 January 2026 00:54:40 +0000 (0:00:01.869) 0:09:06.057 ****** 2026-01-03 00:56:48.146008 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:56:48.146011 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:56:48.146050 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:56:48.146056 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:56:48.146061 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:56:48.146066 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:56:48.146072 | orchestrator | 2026-01-03 00:56:48.146075 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-01-03 00:56:48.146078 | orchestrator | Saturday 03 January 2026 00:54:43 +0000 (0:00:03.451) 
0:09:09.509 ****** 2026-01-03 00:56:48.146082 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:56:48.146085 | orchestrator | 2026-01-03 00:56:48.146088 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-01-03 00:56:48.146091 | orchestrator | Saturday 03 January 2026 00:54:44 +0000 (0:00:01.040) 0:09:10.549 ****** 2026-01-03 00:56:48.146094 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.146097 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.146100 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.146103 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.146106 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.146109 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.146112 | orchestrator | 2026-01-03 00:56:48.146115 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-01-03 00:56:48.146122 | orchestrator | Saturday 03 January 2026 00:54:45 +0000 (0:00:00.695) 0:09:11.244 ****** 2026-01-03 00:56:48.146125 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:56:48.146128 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:56:48.146131 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:56:48.146134 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:56:48.146137 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:56:48.146140 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:56:48.146143 | orchestrator | 2026-01-03 00:56:48.146146 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-01-03 00:56:48.146149 | orchestrator | Saturday 03 January 2026 00:54:48 +0000 (0:00:02.975) 0:09:14.220 ****** 2026-01-03 00:56:48.146152 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.146155 | orchestrator 
| ok: [testbed-node-4] 2026-01-03 00:56:48.146158 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.146161 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:56:48.146164 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:56:48.146167 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:56:48.146170 | orchestrator | 2026-01-03 00:56:48.146173 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-01-03 00:56:48.146177 | orchestrator | 2026-01-03 00:56:48.146180 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-03 00:56:48.146183 | orchestrator | Saturday 03 January 2026 00:54:49 +0000 (0:00:00.933) 0:09:15.154 ****** 2026-01-03 00:56:48.146189 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:56:48.146192 | orchestrator | 2026-01-03 00:56:48.146195 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-03 00:56:48.146198 | orchestrator | Saturday 03 January 2026 00:54:49 +0000 (0:00:00.361) 0:09:15.515 ****** 2026-01-03 00:56:48.146201 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:56:48.146204 | orchestrator | 2026-01-03 00:56:48.146207 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-03 00:56:48.146211 | orchestrator | Saturday 03 January 2026 00:54:50 +0000 (0:00:00.491) 0:09:16.006 ****** 2026-01-03 00:56:48.146216 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.146220 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.146228 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.146238 | orchestrator | 2026-01-03 00:56:48.146242 | orchestrator | TASK [ceph-handler : Check for an osd container] 
******************************* 2026-01-03 00:56:48.146246 | orchestrator | Saturday 03 January 2026 00:54:50 +0000 (0:00:00.221) 0:09:16.228 ****** 2026-01-03 00:56:48.146251 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.146255 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.146260 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.146265 | orchestrator | 2026-01-03 00:56:48.146270 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-03 00:56:48.146275 | orchestrator | Saturday 03 January 2026 00:54:51 +0000 (0:00:00.673) 0:09:16.901 ****** 2026-01-03 00:56:48.146279 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.146284 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.146288 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.146292 | orchestrator | 2026-01-03 00:56:48.146297 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-03 00:56:48.146302 | orchestrator | Saturday 03 January 2026 00:54:52 +0000 (0:00:01.211) 0:09:18.112 ****** 2026-01-03 00:56:48.146306 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.146314 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.146319 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.146324 | orchestrator | 2026-01-03 00:56:48.146330 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-03 00:56:48.146336 | orchestrator | Saturday 03 January 2026 00:54:52 +0000 (0:00:00.728) 0:09:18.841 ****** 2026-01-03 00:56:48.146339 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.146342 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.146345 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.146348 | orchestrator | 2026-01-03 00:56:48.146352 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-03 
00:56:48.146355 | orchestrator | Saturday 03 January 2026 00:54:53 +0000 (0:00:00.355) 0:09:19.196 ****** 2026-01-03 00:56:48.146358 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.146361 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.146364 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.146367 | orchestrator | 2026-01-03 00:56:48.146370 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-03 00:56:48.146373 | orchestrator | Saturday 03 January 2026 00:54:53 +0000 (0:00:00.317) 0:09:19.514 ****** 2026-01-03 00:56:48.146376 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.146379 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.146382 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.146385 | orchestrator | 2026-01-03 00:56:48.146389 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-03 00:56:48.146392 | orchestrator | Saturday 03 January 2026 00:54:54 +0000 (0:00:00.488) 0:09:20.002 ****** 2026-01-03 00:56:48.146395 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.146398 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.146401 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.146409 | orchestrator | 2026-01-03 00:56:48.146412 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-03 00:56:48.146415 | orchestrator | Saturday 03 January 2026 00:54:54 +0000 (0:00:00.673) 0:09:20.676 ****** 2026-01-03 00:56:48.146418 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.146422 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.146425 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.146428 | orchestrator | 2026-01-03 00:56:48.146431 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-03 00:56:48.146434 | orchestrator | 
Saturday 03 January 2026 00:54:55 +0000 (0:00:00.683) 0:09:21.359 ****** 2026-01-03 00:56:48.146437 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.146440 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.146443 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.146447 | orchestrator | 2026-01-03 00:56:48.146450 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-03 00:56:48.146456 | orchestrator | Saturday 03 January 2026 00:54:55 +0000 (0:00:00.269) 0:09:21.629 ****** 2026-01-03 00:56:48.146459 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.146462 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.146465 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.146468 | orchestrator | 2026-01-03 00:56:48.146471 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-03 00:56:48.146475 | orchestrator | Saturday 03 January 2026 00:54:56 +0000 (0:00:00.531) 0:09:22.160 ****** 2026-01-03 00:56:48.146478 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.146481 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.146484 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.146487 | orchestrator | 2026-01-03 00:56:48.146490 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-03 00:56:48.146493 | orchestrator | Saturday 03 January 2026 00:54:56 +0000 (0:00:00.540) 0:09:22.701 ****** 2026-01-03 00:56:48.146496 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.146499 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.146502 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.146505 | orchestrator | 2026-01-03 00:56:48.146508 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-03 00:56:48.146512 | orchestrator | Saturday 03 January 2026 00:54:57 +0000 
(0:00:00.635) 0:09:23.336 ****** 2026-01-03 00:56:48.146515 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.146518 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.146521 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.146524 | orchestrator | 2026-01-03 00:56:48.146527 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-03 00:56:48.146530 | orchestrator | Saturday 03 January 2026 00:54:57 +0000 (0:00:00.426) 0:09:23.763 ****** 2026-01-03 00:56:48.146533 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.146536 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.146539 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.146542 | orchestrator | 2026-01-03 00:56:48.146545 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-03 00:56:48.146549 | orchestrator | Saturday 03 January 2026 00:54:58 +0000 (0:00:00.604) 0:09:24.367 ****** 2026-01-03 00:56:48.146552 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.146555 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.146558 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.146561 | orchestrator | 2026-01-03 00:56:48.146564 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-03 00:56:48.146567 | orchestrator | Saturday 03 January 2026 00:54:58 +0000 (0:00:00.341) 0:09:24.708 ****** 2026-01-03 00:56:48.146571 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.146574 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.146577 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.146580 | orchestrator | 2026-01-03 00:56:48.146583 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-03 00:56:48.146589 | orchestrator | Saturday 03 January 2026 00:54:59 +0000 (0:00:00.369) 
0:09:25.077 ****** 2026-01-03 00:56:48.146592 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.146595 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.146598 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.146601 | orchestrator | 2026-01-03 00:56:48.146604 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-03 00:56:48.146607 | orchestrator | Saturday 03 January 2026 00:54:59 +0000 (0:00:00.364) 0:09:25.442 ****** 2026-01-03 00:56:48.146610 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.146613 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.146617 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.146620 | orchestrator | 2026-01-03 00:56:48.146624 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-01-03 00:56:48.146628 | orchestrator | Saturday 03 January 2026 00:55:00 +0000 (0:00:00.695) 0:09:26.137 ****** 2026-01-03 00:56:48.146631 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.146634 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.146637 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-01-03 00:56:48.146640 | orchestrator | 2026-01-03 00:56:48.146643 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-01-03 00:56:48.146646 | orchestrator | Saturday 03 January 2026 00:55:00 +0000 (0:00:00.413) 0:09:26.550 ****** 2026-01-03 00:56:48.146649 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-03 00:56:48.146652 | orchestrator | 2026-01-03 00:56:48.146655 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-01-03 00:56:48.146659 | orchestrator | Saturday 03 January 2026 00:55:02 +0000 (0:00:02.182) 0:09:28.733 ****** 2026-01-03 00:56:48.146663 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-01-03 00:56:48.146667 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.146670 | orchestrator | 2026-01-03 00:56:48.146674 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-01-03 00:56:48.146677 | orchestrator | Saturday 03 January 2026 00:55:03 +0000 (0:00:00.248) 0:09:28.982 ****** 2026-01-03 00:56:48.146681 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-03 00:56:48.146685 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-03 00:56:48.146689 | orchestrator | 2026-01-03 00:56:48.146694 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-01-03 00:56:48.146697 | orchestrator | Saturday 03 January 2026 00:55:12 +0000 (0:00:09.036) 0:09:38.019 ****** 2026-01-03 00:56:48.146700 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-03 00:56:48.146703 | orchestrator | 2026-01-03 00:56:48.146707 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-01-03 00:56:48.146710 | orchestrator | Saturday 03 January 2026 00:55:15 +0000 (0:00:03.637) 0:09:41.656 ****** 2026-01-03 00:56:48.146713 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-01-03 00:56:48.146716 | orchestrator | 2026-01-03 00:56:48.146719 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-01-03 00:56:48.146722 | orchestrator | Saturday 03 January 2026 00:55:16 +0000 (0:00:00.495) 0:09:42.152 ****** 2026-01-03 00:56:48.146727 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-03 00:56:48.146730 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-03 00:56:48.146733 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-03 00:56:48.146736 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-01-03 00:56:48.146739 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-01-03 00:56:48.146743 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-01-03 00:56:48.146746 | orchestrator | 2026-01-03 00:56:48.146749 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-01-03 00:56:48.146752 | orchestrator | Saturday 03 January 2026 00:55:17 +0000 (0:00:01.079) 0:09:43.231 ****** 2026-01-03 00:56:48.146755 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:56:48.146758 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-03 00:56:48.146761 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-03 00:56:48.146764 | orchestrator | 2026-01-03 00:56:48.146767 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-01-03 00:56:48.146770 | orchestrator | Saturday 03 January 2026 00:55:19 +0000 (0:00:02.372) 0:09:45.603 ****** 2026-01-03 00:56:48.146773 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-03 00:56:48.146777 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-01-03 00:56:48.146780 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:56:48.146783 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-03 00:56:48.146786 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-03 00:56:48.146789 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-03 00:56:48.146792 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:56:48.146795 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-03 00:56:48.146798 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:56:48.146801 | orchestrator | 2026-01-03 00:56:48.146804 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-01-03 00:56:48.146808 | orchestrator | Saturday 03 January 2026 00:55:21 +0000 (0:00:01.496) 0:09:47.100 ****** 2026-01-03 00:56:48.146811 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:56:48.146814 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:56:48.146817 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:56:48.146831 | orchestrator | 2026-01-03 00:56:48.146836 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-01-03 00:56:48.146839 | orchestrator | Saturday 03 January 2026 00:55:24 +0000 (0:00:02.855) 0:09:49.956 ****** 2026-01-03 00:56:48.146843 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.146846 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.146849 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.146852 | orchestrator | 2026-01-03 00:56:48.146855 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-01-03 00:56:48.146858 | orchestrator | Saturday 03 January 2026 00:55:24 +0000 (0:00:00.297) 0:09:50.254 ****** 2026-01-03 00:56:48.146861 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-01-03 00:56:48.146864 | orchestrator | 2026-01-03 00:56:48.146867 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-01-03 00:56:48.146870 | orchestrator | Saturday 03 January 2026 00:55:25 +0000 (0:00:00.739) 0:09:50.994 ****** 2026-01-03 00:56:48.146873 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:56:48.146876 | orchestrator | 2026-01-03 00:56:48.146879 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-01-03 00:56:48.146883 | orchestrator | Saturday 03 January 2026 00:55:25 +0000 (0:00:00.515) 0:09:51.509 ****** 2026-01-03 00:56:48.146886 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:56:48.146892 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:56:48.146895 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:56:48.146898 | orchestrator | 2026-01-03 00:56:48.146901 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-01-03 00:56:48.146904 | orchestrator | Saturday 03 January 2026 00:55:26 +0000 (0:00:01.297) 0:09:52.807 ****** 2026-01-03 00:56:48.146907 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:56:48.146910 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:56:48.146913 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:56:48.146916 | orchestrator | 2026-01-03 00:56:48.146920 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-01-03 00:56:48.146923 | orchestrator | Saturday 03 January 2026 00:55:28 +0000 (0:00:01.505) 0:09:54.313 ****** 2026-01-03 00:56:48.146926 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:56:48.146929 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:56:48.146932 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:56:48.146935 | orchestrator | 2026-01-03 
00:56:48.146938 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-01-03 00:56:48.146943 | orchestrator | Saturday 03 January 2026 00:55:30 +0000 (0:00:02.068) 0:09:56.381 ****** 2026-01-03 00:56:48.146947 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:56:48.146950 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:56:48.146953 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:56:48.146956 | orchestrator | 2026-01-03 00:56:48.146959 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-01-03 00:56:48.146962 | orchestrator | Saturday 03 January 2026 00:55:32 +0000 (0:00:02.036) 0:09:58.417 ****** 2026-01-03 00:56:48.146965 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.146970 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.146973 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.146976 | orchestrator | 2026-01-03 00:56:48.146980 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-03 00:56:48.146983 | orchestrator | Saturday 03 January 2026 00:55:33 +0000 (0:00:01.350) 0:09:59.768 ****** 2026-01-03 00:56:48.146986 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:56:48.146989 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:56:48.146992 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:56:48.146995 | orchestrator | 2026-01-03 00:56:48.146998 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-01-03 00:56:48.147001 | orchestrator | Saturday 03 January 2026 00:55:34 +0000 (0:00:00.631) 0:10:00.400 ****** 2026-01-03 00:56:48.147004 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:56:48.147007 | orchestrator | 2026-01-03 00:56:48.147010 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-01-03 00:56:48.147013 | orchestrator | Saturday 03 January 2026 00:55:35 +0000 (0:00:00.679) 0:10:01.080 ****** 2026-01-03 00:56:48.147016 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.147020 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.147023 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.147028 | orchestrator | 2026-01-03 00:56:48.147033 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-01-03 00:56:48.147041 | orchestrator | Saturday 03 January 2026 00:55:35 +0000 (0:00:00.330) 0:10:01.410 ****** 2026-01-03 00:56:48.147048 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:56:48.147053 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:56:48.147057 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:56:48.147063 | orchestrator | 2026-01-03 00:56:48.147068 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-01-03 00:56:48.147073 | orchestrator | Saturday 03 January 2026 00:55:36 +0000 (0:00:01.254) 0:10:02.665 ****** 2026-01-03 00:56:48.147078 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-03 00:56:48.147083 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-03 00:56:48.147091 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-03 00:56:48.147094 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.147097 | orchestrator | 2026-01-03 00:56:48.147101 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-01-03 00:56:48.147104 | orchestrator | Saturday 03 January 2026 00:55:37 +0000 (0:00:00.830) 0:10:03.495 ****** 2026-01-03 00:56:48.147107 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.147110 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.147113 | orchestrator | ok: [testbed-node-5] 2026-01-03 
00:56:48.147116 | orchestrator | 2026-01-03 00:56:48.147119 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-01-03 00:56:48.147122 | orchestrator | 2026-01-03 00:56:48.147125 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-03 00:56:48.147134 | orchestrator | Saturday 03 January 2026 00:55:38 +0000 (0:00:00.804) 0:10:04.300 ****** 2026-01-03 00:56:48.147137 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:56:48.147140 | orchestrator | 2026-01-03 00:56:48.147144 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-03 00:56:48.147147 | orchestrator | Saturday 03 January 2026 00:55:38 +0000 (0:00:00.481) 0:10:04.782 ****** 2026-01-03 00:56:48.147150 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:56:48.147153 | orchestrator | 2026-01-03 00:56:48.147156 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-03 00:56:48.147159 | orchestrator | Saturday 03 January 2026 00:55:39 +0000 (0:00:00.692) 0:10:05.474 ****** 2026-01-03 00:56:48.147162 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.147165 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.147168 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.147171 | orchestrator | 2026-01-03 00:56:48.147174 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-03 00:56:48.147178 | orchestrator | Saturday 03 January 2026 00:55:39 +0000 (0:00:00.303) 0:10:05.778 ****** 2026-01-03 00:56:48.147181 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.147184 | orchestrator | ok: [testbed-node-4] 2026-01-03 
00:56:48.147187 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.147190 | orchestrator | 2026-01-03 00:56:48.147193 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-03 00:56:48.147196 | orchestrator | Saturday 03 January 2026 00:55:40 +0000 (0:00:00.750) 0:10:06.528 ****** 2026-01-03 00:56:48.147199 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.147202 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.147205 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.147208 | orchestrator | 2026-01-03 00:56:48.147211 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-03 00:56:48.147214 | orchestrator | Saturday 03 January 2026 00:55:41 +0000 (0:00:00.885) 0:10:07.413 ****** 2026-01-03 00:56:48.147218 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.147221 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.147224 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.147227 | orchestrator | 2026-01-03 00:56:48.147230 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-03 00:56:48.147233 | orchestrator | Saturday 03 January 2026 00:55:42 +0000 (0:00:01.253) 0:10:08.667 ****** 2026-01-03 00:56:48.147236 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.147242 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.147245 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.147249 | orchestrator | 2026-01-03 00:56:48.147252 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-03 00:56:48.147255 | orchestrator | Saturday 03 January 2026 00:55:43 +0000 (0:00:00.252) 0:10:08.919 ****** 2026-01-03 00:56:48.147258 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.147264 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.147267 | orchestrator | skipping: 
[testbed-node-5] 2026-01-03 00:56:48.147271 | orchestrator | 2026-01-03 00:56:48.147274 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-03 00:56:48.147277 | orchestrator | Saturday 03 January 2026 00:55:43 +0000 (0:00:00.265) 0:10:09.185 ****** 2026-01-03 00:56:48.147280 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.147283 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.147286 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.147289 | orchestrator | 2026-01-03 00:56:48.147292 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-03 00:56:48.147295 | orchestrator | Saturday 03 January 2026 00:55:43 +0000 (0:00:00.265) 0:10:09.450 ****** 2026-01-03 00:56:48.147298 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.147301 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.147304 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.147307 | orchestrator | 2026-01-03 00:56:48.147311 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-03 00:56:48.147314 | orchestrator | Saturday 03 January 2026 00:55:44 +0000 (0:00:00.945) 0:10:10.396 ****** 2026-01-03 00:56:48.147317 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.147320 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.147323 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.147326 | orchestrator | 2026-01-03 00:56:48.147329 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-03 00:56:48.147332 | orchestrator | Saturday 03 January 2026 00:55:45 +0000 (0:00:00.761) 0:10:11.158 ****** 2026-01-03 00:56:48.147335 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.147338 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.147341 | orchestrator | skipping: [testbed-node-5] 2026-01-03 
00:56:48.147344 | orchestrator | 2026-01-03 00:56:48.147348 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-03 00:56:48.147351 | orchestrator | Saturday 03 January 2026 00:55:45 +0000 (0:00:00.242) 0:10:11.400 ****** 2026-01-03 00:56:48.147354 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.147357 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.147360 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.147363 | orchestrator | 2026-01-03 00:56:48.147366 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-03 00:56:48.147369 | orchestrator | Saturday 03 January 2026 00:55:45 +0000 (0:00:00.236) 0:10:11.637 ****** 2026-01-03 00:56:48.147372 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.147375 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.147378 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.147381 | orchestrator | 2026-01-03 00:56:48.147384 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-03 00:56:48.147388 | orchestrator | Saturday 03 January 2026 00:55:46 +0000 (0:00:00.442) 0:10:12.079 ****** 2026-01-03 00:56:48.147391 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.147394 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.147397 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.147400 | orchestrator | 2026-01-03 00:56:48.147403 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-03 00:56:48.147408 | orchestrator | Saturday 03 January 2026 00:55:46 +0000 (0:00:00.266) 0:10:12.346 ****** 2026-01-03 00:56:48.147411 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.147414 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.147417 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.147420 | orchestrator | 2026-01-03 
00:56:48.147423 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-03 00:56:48.147426 | orchestrator | Saturday 03 January 2026 00:55:46 +0000 (0:00:00.284) 0:10:12.630 ****** 2026-01-03 00:56:48.147429 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.147432 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.147435 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.147440 | orchestrator | 2026-01-03 00:56:48.147443 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-03 00:56:48.147446 | orchestrator | Saturday 03 January 2026 00:55:47 +0000 (0:00:00.271) 0:10:12.902 ****** 2026-01-03 00:56:48.147450 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.147453 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.147456 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.147459 | orchestrator | 2026-01-03 00:56:48.147462 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-03 00:56:48.147465 | orchestrator | Saturday 03 January 2026 00:55:47 +0000 (0:00:00.490) 0:10:13.392 ****** 2026-01-03 00:56:48.147468 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.147471 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.147474 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.147477 | orchestrator | 2026-01-03 00:56:48.147480 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-03 00:56:48.147483 | orchestrator | Saturday 03 January 2026 00:55:47 +0000 (0:00:00.254) 0:10:13.647 ****** 2026-01-03 00:56:48.147486 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.147490 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.147493 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.147496 | orchestrator | 2026-01-03 00:56:48.147499 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-03 00:56:48.147502 | orchestrator | Saturday 03 January 2026 00:55:48 +0000 (0:00:00.303) 0:10:13.951 ****** 2026-01-03 00:56:48.147505 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.147508 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.147511 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.147514 | orchestrator | 2026-01-03 00:56:48.147517 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-01-03 00:56:48.147520 | orchestrator | Saturday 03 January 2026 00:55:48 +0000 (0:00:00.712) 0:10:14.663 ****** 2026-01-03 00:56:48.147526 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:56:48.147529 | orchestrator | 2026-01-03 00:56:48.147532 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-03 00:56:48.147535 | orchestrator | Saturday 03 January 2026 00:55:49 +0000 (0:00:00.555) 0:10:15.219 ****** 2026-01-03 00:56:48.147538 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:56:48.147541 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-03 00:56:48.147544 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-03 00:56:48.147547 | orchestrator | 2026-01-03 00:56:48.147550 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-03 00:56:48.147553 | orchestrator | Saturday 03 January 2026 00:55:51 +0000 (0:00:01.917) 0:10:17.136 ****** 2026-01-03 00:56:48.147557 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-03 00:56:48.147560 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-03 00:56:48.147563 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:56:48.147566 | orchestrator 
| changed: [testbed-node-3] => (item=None) 2026-01-03 00:56:48.147569 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-03 00:56:48.147572 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:56:48.147575 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-03 00:56:48.147578 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-03 00:56:48.147581 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:56:48.147584 | orchestrator | 2026-01-03 00:56:48.147587 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-01-03 00:56:48.147591 | orchestrator | Saturday 03 January 2026 00:55:52 +0000 (0:00:01.291) 0:10:18.427 ****** 2026-01-03 00:56:48.147594 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.147597 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.147600 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.147605 | orchestrator | 2026-01-03 00:56:48.147608 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-01-03 00:56:48.147611 | orchestrator | Saturday 03 January 2026 00:55:52 +0000 (0:00:00.276) 0:10:18.704 ****** 2026-01-03 00:56:48.147614 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:56:48.147617 | orchestrator | 2026-01-03 00:56:48.147622 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-01-03 00:56:48.147627 | orchestrator | Saturday 03 January 2026 00:55:53 +0000 (0:00:00.477) 0:10:19.182 ****** 2026-01-03 00:56:48.147632 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-03 00:56:48.147637 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-03 00:56:48.147642 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-03 00:56:48.147647 | orchestrator | 2026-01-03 00:56:48.147651 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-01-03 00:56:48.147658 | orchestrator | Saturday 03 January 2026 00:55:54 +0000 (0:00:01.146) 0:10:20.328 ****** 2026-01-03 00:56:48.147663 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:56:48.147669 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-03 00:56:48.147674 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:56:48.147680 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-03 00:56:48.147685 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:56:48.147691 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-03 00:56:48.147694 | orchestrator | 2026-01-03 00:56:48.147697 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-03 00:56:48.147700 | orchestrator | Saturday 03 January 2026 00:55:58 +0000 (0:00:04.041) 0:10:24.369 ****** 2026-01-03 00:56:48.147703 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:56:48.147706 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-03 00:56:48.147709 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:56:48.147712 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-03 00:56:48.147716 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:56:48.147719 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-03 00:56:48.147722 | orchestrator | 2026-01-03 00:56:48.147726 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-03 00:56:48.147732 | orchestrator | Saturday 03 January 2026 00:56:00 +0000 (0:00:02.372) 0:10:26.742 ****** 2026-01-03 00:56:48.147737 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-03 00:56:48.147742 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:56:48.147747 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-03 00:56:48.147753 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:56:48.147758 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-03 00:56:48.147763 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:56:48.147768 | orchestrator | 2026-01-03 00:56:48.147776 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-01-03 00:56:48.147781 | orchestrator | Saturday 03 January 2026 00:56:01 +0000 (0:00:01.087) 0:10:27.829 ****** 2026-01-03 00:56:48.147787 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-01-03 00:56:48.147790 | orchestrator | 2026-01-03 00:56:48.147793 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-01-03 00:56:48.147797 | orchestrator | Saturday 03 January 2026 00:56:02 +0000 (0:00:00.213) 0:10:28.043 ****** 2026-01-03 00:56:48.147800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-01-03 00:56:48.147803 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-03 00:56:48.147806 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-03 00:56:48.147809 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-03 00:56:48.147814 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-03 00:56:48.147841 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.147849 | orchestrator | 2026-01-03 00:56:48.147854 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-01-03 00:56:48.147859 | orchestrator | Saturday 03 January 2026 00:56:03 +0000 (0:00:01.017) 0:10:29.060 ****** 2026-01-03 00:56:48.147864 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-03 00:56:48.147869 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-03 00:56:48.147874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-03 00:56:48.147879 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-03 00:56:48.147884 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-03 00:56:48.147888 | orchestrator | skipping: [testbed-node-3] 2026-01-03 
00:56:48.147891 | orchestrator | 2026-01-03 00:56:48.147894 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-01-03 00:56:48.147898 | orchestrator | Saturday 03 January 2026 00:56:03 +0000 (0:00:00.565) 0:10:29.626 ****** 2026-01-03 00:56:48.147902 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-03 00:56:48.147908 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-03 00:56:48.147913 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-03 00:56:48.147919 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-03 00:56:48.147924 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-03 00:56:48.147929 | orchestrator | 2026-01-03 00:56:48.147935 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-01-03 00:56:48.147940 | orchestrator | Saturday 03 January 2026 00:56:31 +0000 (0:00:28.217) 0:10:57.843 ****** 2026-01-03 00:56:48.147945 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.147950 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.147958 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.147964 | orchestrator | 2026-01-03 00:56:48.147969 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-01-03 00:56:48.147974 | orchestrator | 
Saturday 03 January 2026 00:56:32 +0000 (0:00:00.300) 0:10:58.144 ****** 2026-01-03 00:56:48.147979 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.147984 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.147991 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.147996 | orchestrator | 2026-01-03 00:56:48.148001 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-01-03 00:56:48.148006 | orchestrator | Saturday 03 January 2026 00:56:32 +0000 (0:00:00.302) 0:10:58.446 ****** 2026-01-03 00:56:48.148012 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:56:48.148017 | orchestrator | 2026-01-03 00:56:48.148022 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-01-03 00:56:48.148027 | orchestrator | Saturday 03 January 2026 00:56:33 +0000 (0:00:00.751) 0:10:59.197 ****** 2026-01-03 00:56:48.148036 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:56:48.148042 | orchestrator | 2026-01-03 00:56:48.148047 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-01-03 00:56:48.148052 | orchestrator | Saturday 03 January 2026 00:56:33 +0000 (0:00:00.524) 0:10:59.722 ****** 2026-01-03 00:56:48.148057 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:56:48.148062 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:56:48.148067 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:56:48.148072 | orchestrator | 2026-01-03 00:56:48.148077 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-01-03 00:56:48.148083 | orchestrator | Saturday 03 January 2026 00:56:35 +0000 (0:00:01.408) 0:11:01.131 ****** 2026-01-03 00:56:48.148088 | orchestrator | changed: 
[testbed-node-3] 2026-01-03 00:56:48.148093 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:56:48.148098 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:56:48.148103 | orchestrator | 2026-01-03 00:56:48.148109 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-01-03 00:56:48.148114 | orchestrator | Saturday 03 January 2026 00:56:36 +0000 (0:00:01.655) 0:11:02.786 ****** 2026-01-03 00:56:48.148119 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:56:48.148124 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:56:48.148129 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:56:48.148134 | orchestrator | 2026-01-03 00:56:48.148139 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-01-03 00:56:48.148144 | orchestrator | Saturday 03 January 2026 00:56:39 +0000 (0:00:02.802) 0:11:05.588 ****** 2026-01-03 00:56:48.148150 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-03 00:56:48.148155 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-03 00:56:48.148160 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-03 00:56:48.148165 | orchestrator | 2026-01-03 00:56:48.148170 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-03 00:56:48.148176 | orchestrator | Saturday 03 January 2026 00:56:42 +0000 (0:00:02.521) 0:11:08.110 ****** 2026-01-03 00:56:48.148181 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.148186 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.148191 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.148196 | orchestrator 
| 2026-01-03 00:56:48.148201 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-03 00:56:48.148206 | orchestrator | Saturday 03 January 2026 00:56:42 +0000 (0:00:00.361) 0:11:08.472 ****** 2026-01-03 00:56:48.148214 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:56:48.148219 | orchestrator | 2026-01-03 00:56:48.148224 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-03 00:56:48.148230 | orchestrator | Saturday 03 January 2026 00:56:43 +0000 (0:00:00.523) 0:11:08.995 ****** 2026-01-03 00:56:48.148235 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.148240 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.148245 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.148250 | orchestrator | 2026-01-03 00:56:48.148257 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-03 00:56:48.148263 | orchestrator | Saturday 03 January 2026 00:56:43 +0000 (0:00:00.516) 0:11:09.512 ****** 2026-01-03 00:56:48.148268 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:56:48.148273 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:56:48.148278 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:56:48.148283 | orchestrator | 2026-01-03 00:56:48.148288 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-01-03 00:56:48.148294 | orchestrator | Saturday 03 January 2026 00:56:43 +0000 (0:00:00.340) 0:11:09.852 ****** 2026-01-03 00:56:48.148299 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-03 00:56:48.148304 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-03 00:56:48.148309 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-03 00:56:48.148314 | orchestrator 
| skipping: [testbed-node-3] 2026-01-03 00:56:48.148319 | orchestrator | 2026-01-03 00:56:48.148324 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-01-03 00:56:48.148330 | orchestrator | Saturday 03 January 2026 00:56:44 +0000 (0:00:00.592) 0:11:10.445 ****** 2026-01-03 00:56:48.148335 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:56:48.148340 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:56:48.148345 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:56:48.148351 | orchestrator | 2026-01-03 00:56:48.148356 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:56:48.148361 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-01-03 00:56:48.148366 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-01-03 00:56:48.148371 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-01-03 00:56:48.148377 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-01-03 00:56:48.148382 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-01-03 00:56:48.148389 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-01-03 00:56:48.148395 | orchestrator | 2026-01-03 00:56:48.148400 | orchestrator | 2026-01-03 00:56:48.148405 | orchestrator | 2026-01-03 00:56:48.148410 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:56:48.148415 | orchestrator | Saturday 03 January 2026 00:56:44 +0000 (0:00:00.240) 0:11:10.685 ****** 2026-01-03 00:56:48.148421 | orchestrator | =============================================================================== 
2026-01-03 00:56:48.148426 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 83.97s 2026-01-03 00:56:48.148431 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 41.40s 2026-01-03 00:56:48.148436 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.02s 2026-01-03 00:56:48.148444 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 28.22s 2026-01-03 00:56:48.148449 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.95s 2026-01-03 00:56:48.148455 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.34s 2026-01-03 00:56:48.148460 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.46s 2026-01-03 00:56:48.148465 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.21s 2026-01-03 00:56:48.148470 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.53s 2026-01-03 00:56:48.148475 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 9.04s 2026-01-03 00:56:48.148480 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.75s 2026-01-03 00:56:48.148485 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.51s 2026-01-03 00:56:48.148490 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.05s 2026-01-03 00:56:48.148495 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 4.50s 2026-01-03 00:56:48.148501 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.12s 2026-01-03 00:56:48.148506 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 
4.08s 2026-01-03 00:56:48.148511 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.04s 2026-01-03 00:56:48.148516 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.81s 2026-01-03 00:56:48.148521 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.64s 2026-01-03 00:56:48.148526 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.62s 2026-01-03 00:56:48.148531 | orchestrator | 2026-01-03 00:56:48 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state STARTED 2026-01-03 00:56:48.148536 | orchestrator | 2026-01-03 00:56:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:56:51.192705 | orchestrator | 2026-01-03 00:56:51 | INFO  | Task bcf712a4-6024-402f-aec1-7365fc3d186a is in state STARTED 2026-01-03 00:56:51.194752 | orchestrator | 2026-01-03 00:56:51 | INFO  | Task 99d18e6a-b552-46e6-90bf-e2a9eb821ee1 is in state STARTED 2026-01-03 00:56:51.198185 | orchestrator | 2026-01-03 00:56:51 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state STARTED 2026-01-03 00:56:51.198318 | orchestrator | 2026-01-03 00:56:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:57:03.410297 | orchestrator | 2026-01-03 00:57:03.410350 | orchestrator | 2026-01-03 00:57:03.410356 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-03 00:57:03.410361 | orchestrator | 2026-01-03 00:57:03.410365 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-03 00:57:03.410369 | orchestrator | Saturday 03 January 2026 00:54:26 +0000 (0:00:00.280) 0:00:00.280 ****** 2026-01-03 00:57:03.410406 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:03.410412 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:03.410416 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:03.410419 | orchestrator | 2026-01-03 00:57:03.410423 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-03 00:57:03.410427 | orchestrator | Saturday 03 January 2026 00:54:27 +0000 (0:00:00.353) 0:00:00.634 ****** 2026-01-03 00:57:03.410432 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-01-03 00:57:03.410519 | orchestrator | ok: [testbed-node-1] =>
(item=enable_opensearch_True) 2026-01-03 00:57:03.410526 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-01-03 00:57:03.410530 | orchestrator | 2026-01-03 00:57:03.410533 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-01-03 00:57:03.410537 | orchestrator | 2026-01-03 00:57:03.410541 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-03 00:57:03.410545 | orchestrator | Saturday 03 January 2026 00:54:27 +0000 (0:00:00.410) 0:00:01.044 ****** 2026-01-03 00:57:03.410549 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:57:03.410553 | orchestrator | 2026-01-03 00:57:03.410557 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-01-03 00:57:03.410560 | orchestrator | Saturday 03 January 2026 00:54:27 +0000 (0:00:00.484) 0:00:01.529 ****** 2026-01-03 00:57:03.410565 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-03 00:57:03.410569 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-03 00:57:03.410572 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-03 00:57:03.410576 | orchestrator | 2026-01-03 00:57:03.410580 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-01-03 00:57:03.410584 | orchestrator | Saturday 03 January 2026 00:54:28 +0000 (0:00:00.730) 0:00:02.259 ****** 2026-01-03 00:57:03.410633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:57:03.410642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:57:03.410663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:57:03.410669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-03 00:57:03.410676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-03 00:57:03.410681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-03 00:57:03.410688 | orchestrator | 2026-01-03 00:57:03.410692 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-03 00:57:03.410696 | orchestrator | Saturday 03 January 2026 00:54:30 +0000 (0:00:01.763) 0:00:04.022 ****** 2026-01-03 00:57:03.410700 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:57:03.410703 | orchestrator | 2026-01-03 00:57:03.410707 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-01-03 00:57:03.410715 | orchestrator | Saturday 03 January 2026 00:54:30 +0000 (0:00:00.514) 0:00:04.537 ****** 2026-01-03 00:57:03.410719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:57:03.410723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:57:03.410729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:57:03.410736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-03 00:57:03.410744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-03 00:57:03.410748 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-03 00:57:03.410753 | orchestrator | 2026-01-03 00:57:03.410757 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-01-03 00:57:03.410760 | orchestrator | Saturday 03 January 2026 00:54:33 +0000 (0:00:02.556) 0:00:07.094 ****** 2026-01-03 00:57:03.410768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:57:03.410776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-03 00:57:03.410781 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:03.410785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:57:03.410789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-03 00:57:03.410797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:57:03.410802 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:03.410808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  
2026-01-03 00:57:03.410813 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:03.410816 | orchestrator | 2026-01-03 00:57:03.410820 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-01-03 00:57:03.410824 | orchestrator | Saturday 03 January 2026 00:54:34 +0000 (0:00:01.111) 0:00:08.205 ****** 2026-01-03 00:57:03.410828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:57:03.410833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': 
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:57:03.410842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-03 00:57:03.410846 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:03.410853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-03 00:57:03.410857 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:03.410861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:57:03.410868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-03 00:57:03.410875 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:03.410879 | orchestrator | 2026-01-03 00:57:03.410914 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-01-03 00:57:03.410918 | orchestrator | Saturday 03 January 2026 00:54:35 +0000 (0:00:01.051) 0:00:09.257 ****** 2026-01-03 00:57:03.410927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': 
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:57:03.410935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:57:03.410939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 
00:57:03.410946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-03 00:57:03.410953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-03 00:57:03.410960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-03 00:57:03 | INFO  | Task bcf712a4-6024-402f-aec1-7365fc3d186a is in state SUCCESS 2026-01-03 00:57:03 | INFO  | Task 99d18e6a-b552-46e6-90bf-e2a9eb821ee1 is in state STARTED 2026-01-03 00:57:03.410973 | orchestrator | 2026-01-03 00:57:03.410977 | orchestrator | TASK [opensearch : 
Copying over opensearch service config file] **************** 2026-01-03 00:57:03.410981 | orchestrator | Saturday 03 January 2026 00:54:37 +0000 (0:00:02.316) 0:00:11.573 ****** 2026-01-03 00:57:03.410985 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:03.410991 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:03.410995 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:57:03.410998 | orchestrator | 2026-01-03 00:57:03.411002 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-01-03 00:57:03.411006 | orchestrator | Saturday 03 January 2026 00:54:40 +0000 (0:00:02.291) 0:00:13.865 ****** 2026-01-03 00:57:03.411010 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:03.411013 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:03.411017 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:57:03.411021 | orchestrator | 2026-01-03 00:57:03.411025 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-01-03 00:57:03.411029 | orchestrator | Saturday 03 January 2026 00:54:42 +0000 (0:00:01.856) 0:00:15.721 ****** 2026-01-03 00:57:03.411034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:57:03.411039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:57:03.411047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 00:57:03.411054 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-03 00:57:03.411067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-03 00:57:03.411076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-03 00:57:03.411083 | orchestrator | 2026-01-03 00:57:03.411089 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-01-03 00:57:03.411096 | orchestrator | Saturday 03 January 2026 00:54:44 +0000 (0:00:02.187) 0:00:17.909 ****** 2026-01-03 00:57:03.411103 | orchestrator | changed: [testbed-node-0] => { 2026-01-03 00:57:03.411109 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 
00:57:03.411116 | orchestrator | } 2026-01-03 00:57:03.411122 | orchestrator | changed: [testbed-node-1] => { 2026-01-03 00:57:03.411132 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:57:03.411139 | orchestrator | } 2026-01-03 00:57:03.411146 | orchestrator | changed: [testbed-node-2] => { 2026-01-03 00:57:03.411150 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:57:03.411154 | orchestrator | } 2026-01-03 00:57:03.411160 | orchestrator | 2026-01-03 00:57:03.411167 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-03 00:57:03.411173 | orchestrator | Saturday 03 January 2026 00:54:44 +0000 (0:00:00.273) 0:00:18.183 ****** 2026-01-03 00:57:03.411185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:57:03.411192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-03 00:57:03.411202 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:03.411210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 00:57:03.411221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-03 00:57:03.411229 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:03.411234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option 
httpchk']}}}})  2026-01-03 00:57:03.411241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-03 00:57:03.411246 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:03.411251 | orchestrator | 2026-01-03 00:57:03.411255 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-03 00:57:03.411260 | orchestrator | Saturday 03 January 2026 00:54:46 +0000 (0:00:01.628) 0:00:19.811 ****** 2026-01-03 00:57:03.411265 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:03.411269 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:03.411274 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:03.411278 | orchestrator | 2026-01-03 00:57:03.411283 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-03 00:57:03.411287 | 
orchestrator | Saturday 03 January 2026 00:54:46 +0000 (0:00:00.257) 0:00:20.068 ****** 2026-01-03 00:57:03.411291 | orchestrator | 2026-01-03 00:57:03.411296 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-03 00:57:03.411300 | orchestrator | Saturday 03 January 2026 00:54:46 +0000 (0:00:00.058) 0:00:20.127 ****** 2026-01-03 00:57:03.411305 | orchestrator | 2026-01-03 00:57:03.411309 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-03 00:57:03.411314 | orchestrator | Saturday 03 January 2026 00:54:46 +0000 (0:00:00.058) 0:00:20.185 ****** 2026-01-03 00:57:03.411318 | orchestrator | 2026-01-03 00:57:03.411323 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-01-03 00:57:03.411327 | orchestrator | Saturday 03 January 2026 00:54:46 +0000 (0:00:00.062) 0:00:20.248 ****** 2026-01-03 00:57:03.411332 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:03.411338 | orchestrator | 2026-01-03 00:57:03.411345 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-01-03 00:57:03.411356 | orchestrator | Saturday 03 January 2026 00:54:46 +0000 (0:00:00.178) 0:00:20.427 ****** 2026-01-03 00:57:03.411363 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:03.411370 | orchestrator | 2026-01-03 00:57:03.411376 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-01-03 00:57:03.411383 | orchestrator | Saturday 03 January 2026 00:54:47 +0000 (0:00:00.176) 0:00:20.603 ****** 2026-01-03 00:57:03.411390 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:03.411397 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:57:03.411404 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:03.411411 | orchestrator | 2026-01-03 00:57:03.411418 | orchestrator | RUNNING HANDLER [opensearch : Restart 
opensearch-dashboards container] ********* 2026-01-03 00:57:03.411424 | orchestrator | Saturday 03 January 2026 00:55:40 +0000 (0:00:53.477) 0:01:14.081 ****** 2026-01-03 00:57:03.411432 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:03.411436 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:57:03.411441 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:03.411446 | orchestrator | 2026-01-03 00:57:03.411450 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-03 00:57:03.411455 | orchestrator | Saturday 03 January 2026 00:56:51 +0000 (0:01:10.831) 0:02:24.912 ****** 2026-01-03 00:57:03.411459 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:57:03.411464 | orchestrator | 2026-01-03 00:57:03.411469 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-01-03 00:57:03.411473 | orchestrator | Saturday 03 January 2026 00:56:51 +0000 (0:00:00.506) 0:02:25.418 ****** 2026-01-03 00:57:03.411478 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:03.411483 | orchestrator | 2026-01-03 00:57:03.411487 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-01-03 00:57:03.411492 | orchestrator | Saturday 03 January 2026 00:56:54 +0000 (0:00:02.240) 0:02:27.659 ****** 2026-01-03 00:57:03.411496 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:03.411501 | orchestrator | 2026-01-03 00:57:03.411505 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-01-03 00:57:03.411508 | orchestrator | Saturday 03 January 2026 00:56:56 +0000 (0:00:02.282) 0:02:29.942 ****** 2026-01-03 00:57:03.411512 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:03.411516 | orchestrator | 2026-01-03 00:57:03.411520 | orchestrator | TASK [opensearch : Apply retention policy to 
existing indices] ***************** 2026-01-03 00:57:03.411523 | orchestrator | Saturday 03 January 2026 00:56:59 +0000 (0:00:02.956) 0:02:32.898 ****** 2026-01-03 00:57:03.411527 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:03.411531 | orchestrator | 2026-01-03 00:57:03.411534 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:57:03.411539 | orchestrator | testbed-node-0 : ok=19  changed=12  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-03 00:57:03.411543 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-03 00:57:03.411547 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-03 00:57:03.411551 | orchestrator | 2026-01-03 00:57:03.411554 | orchestrator | 2026-01-03 00:57:03.411558 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:57:03.411562 | orchestrator | Saturday 03 January 2026 00:57:01 +0000 (0:00:02.343) 0:02:35.242 ****** 2026-01-03 00:57:03.411566 | orchestrator | =============================================================================== 2026-01-03 00:57:03.411569 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 70.83s 2026-01-03 00:57:03.411573 | orchestrator | opensearch : Restart opensearch container ------------------------------ 53.48s 2026-01-03 00:57:03.411577 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.96s 2026-01-03 00:57:03.411584 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.56s 2026-01-03 00:57:03.411588 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.34s 2026-01-03 00:57:03.411593 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.32s 
2026-01-03 00:57:03.411597 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.29s 2026-01-03 00:57:03.411601 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.28s 2026-01-03 00:57:03.411605 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.24s 2026-01-03 00:57:03.411608 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 2.19s 2026-01-03 00:57:03.411612 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.86s 2026-01-03 00:57:03.411616 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.76s 2026-01-03 00:57:03.411619 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.63s 2026-01-03 00:57:03.411623 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.11s 2026-01-03 00:57:03.411627 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.05s 2026-01-03 00:57:03.411630 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.73s 2026-01-03 00:57:03.411634 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s 2026-01-03 00:57:03.411638 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s 2026-01-03 00:57:03.411642 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.48s 2026-01-03 00:57:03.411645 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s 2026-01-03 00:57:03.411649 | orchestrator | 2026-01-03 00:57:03 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state STARTED 2026-01-03 00:57:03.411653 | orchestrator | 2026-01-03 00:57:03 | INFO  | Wait 1 second(s) until the next check 
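Both the container healthchecks above (`healthcheck_curl http://<node>:9200` every 30s, 3 retries) and the "Wait for OpenSearch to become ready" task reduce to the same pattern: poll an HTTP endpoint until it answers. A minimal sketch of that pattern, under the assumption of a hypothetical helper name (the real checks use kolla's `healthcheck_curl` script and Ansible modules, not this code):

```python
import time
import urllib.request
import urllib.error


def wait_for_ready(url: str, retries: int = 3, interval: float = 1.0) -> bool:
    """Poll an HTTP endpoint until it returns 200, or give up.

    Hypothetical analogue of the readiness/healthcheck polling seen in
    the log; parameter names mirror the healthcheck dict ('retries',
    'interval') but this is an illustration, not the deployed check.
    """
    for _ in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # not up yet; wait and retry
        time.sleep(interval)
    return False
```

A failed poll simply exhausts its retries and returns False, which is why the restart handlers above can take 50-70 seconds: the cluster is only reported healthy once the endpoint answers.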
2026-01-03 00:57:06.445478 | orchestrator | 2026-01-03 00:57:06 | INFO  | Task 99d18e6a-b552-46e6-90bf-e2a9eb821ee1 is in state STARTED 2026-01-03 00:57:06.447270 | orchestrator | 2026-01-03 00:57:06 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state STARTED 2026-01-03 00:57:06.447360 | orchestrator | 2026-01-03 00:57:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:57:09.477683 | orchestrator | 2026-01-03 00:57:09 | INFO  | Task 99d18e6a-b552-46e6-90bf-e2a9eb821ee1 is in state STARTED 2026-01-03 00:57:09.479854 | orchestrator | 2026-01-03 00:57:09 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state STARTED 2026-01-03 00:57:09.479978 | orchestrator | 2026-01-03 00:57:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:57:12.529575 | orchestrator | 2026-01-03 00:57:12 | INFO  | Task 99d18e6a-b552-46e6-90bf-e2a9eb821ee1 is in state STARTED 2026-01-03 00:57:12.529631 | orchestrator | 2026-01-03 00:57:12 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state STARTED 2026-01-03 00:57:12.529640 | orchestrator | 2026-01-03 00:57:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:57:15.584169 | orchestrator | 2026-01-03 00:57:15 | INFO  | Task 99d18e6a-b552-46e6-90bf-e2a9eb821ee1 is in state STARTED 2026-01-03 00:57:15.588439 | orchestrator | 2026-01-03 00:57:15 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state STARTED 2026-01-03 00:57:15.588490 | orchestrator | 2026-01-03 00:57:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:57:18.636655 | orchestrator | 2026-01-03 00:57:18 | INFO  | Task 99d18e6a-b552-46e6-90bf-e2a9eb821ee1 is in state STARTED 2026-01-03 00:57:18.639128 | orchestrator | 2026-01-03 00:57:18 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state STARTED 2026-01-03 00:57:18.639278 | orchestrator | 2026-01-03 00:57:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:57:21.683621 | orchestrator | 2026-01-03 00:57:21 | INFO  | Task 
99d18e6a-b552-46e6-90bf-e2a9eb821ee1 is in state STARTED 2026-01-03 00:57:21.686178 | orchestrator | 2026-01-03 00:57:21 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state STARTED 2026-01-03 00:57:21.686234 | orchestrator | 2026-01-03 00:57:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:57:24.738950 | orchestrator | 2026-01-03 00:57:24 | INFO  | Task 99d18e6a-b552-46e6-90bf-e2a9eb821ee1 is in state STARTED 2026-01-03 00:57:24.740655 | orchestrator | 2026-01-03 00:57:24 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state STARTED 2026-01-03 00:57:24.740744 | orchestrator | 2026-01-03 00:57:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:57:27.779789 | orchestrator | 2026-01-03 00:57:27 | INFO  | Task 99d18e6a-b552-46e6-90bf-e2a9eb821ee1 is in state STARTED 2026-01-03 00:57:27.779857 | orchestrator | 2026-01-03 00:57:27 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state STARTED 2026-01-03 00:57:27.779879 | orchestrator | 2026-01-03 00:57:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:57:30.819818 | orchestrator | 2026-01-03 00:57:30 | INFO  | Task 99d18e6a-b552-46e6-90bf-e2a9eb821ee1 is in state STARTED 2026-01-03 00:57:30.821481 | orchestrator | 2026-01-03 00:57:30 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state STARTED 2026-01-03 00:57:30.821553 | orchestrator | 2026-01-03 00:57:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:57:33.863233 | orchestrator | 2026-01-03 00:57:33 | INFO  | Task 99d18e6a-b552-46e6-90bf-e2a9eb821ee1 is in state STARTED 2026-01-03 00:57:33.864594 | orchestrator | 2026-01-03 00:57:33 | INFO  | Task 41c057ee-3c03-40bc-9d3e-5ff87ebdd950 is in state STARTED 2026-01-03 00:57:33.868258 | orchestrator | 2026-01-03 00:57:33 | INFO  | Task 0f49e5e7-1e05-4be1-ad4a-c0afef8ba11c is in state SUCCESS 2026-01-03 00:57:33.868333 | orchestrator | 2026-01-03 00:57:33.869830 | orchestrator | 2026-01-03 00:57:33.869951 | orchestrator | 
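The MariaDB play that follows probes 192.168.16.9:3306 with a `wait_for` search string of "MariaDB": on a fresh deployment nothing is listening yet, so the task fails with a timeout and is deliberately ignored (the play announces this in advance). The check works because a MariaDB server sends an initial handshake packet containing its version string as soon as a client connects. A rough sketch of that banner check, assuming a hypothetical helper name (the real task is Ansible's `wait_for` module, not this code):

```python
import socket


def mariadb_banner_present(host: str, port: int = 3306, timeout: float = 2.0) -> bool:
    """Connect to a MySQL-protocol port and look for 'MariaDB' in the
    server's initial handshake bytes.

    Illustrative analogue of the play's 'Check MariaDB service' task;
    returns False on refused/absent service, matching the timeout the
    log shows on first deployment.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            banner = sock.recv(256)  # server greets first, no request needed
            return b"MariaDB" in banner
    except OSError:
        return False
```

When the check fails, the play skips the `kolla_action_mariadb = upgrade` branch and falls through to `kolla_action_ng`, i.e. a fresh deploy rather than an upgrade, which is exactly what the task results below show.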
PLAY [Set kolla_action_mariadb] ************************************************ 2026-01-03 00:57:33.869962 | orchestrator | 2026-01-03 00:57:33.869969 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-01-03 00:57:33.869976 | orchestrator | Saturday 03 January 2026 00:54:26 +0000 (0:00:00.086) 0:00:00.086 ****** 2026-01-03 00:57:33.869982 | orchestrator | ok: [localhost] => { 2026-01-03 00:57:33.869989 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-01-03 00:57:33.869996 | orchestrator | } 2026-01-03 00:57:33.870068 | orchestrator | 2026-01-03 00:57:33.870076 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-01-03 00:57:33.870082 | orchestrator | Saturday 03 January 2026 00:54:26 +0000 (0:00:00.050) 0:00:00.137 ****** 2026-01-03 00:57:33.870089 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-01-03 00:57:33.870096 | orchestrator | ...ignoring 2026-01-03 00:57:33.870102 | orchestrator | 2026-01-03 00:57:33.870109 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-01-03 00:57:33.870115 | orchestrator | Saturday 03 January 2026 00:54:29 +0000 (0:00:02.860) 0:00:02.997 ****** 2026-01-03 00:57:33.870121 | orchestrator | skipping: [localhost] 2026-01-03 00:57:33.870128 | orchestrator | 2026-01-03 00:57:33.870136 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-01-03 00:57:33.870150 | orchestrator | Saturday 03 January 2026 00:54:29 +0000 (0:00:00.064) 0:00:03.061 ****** 2026-01-03 00:57:33.870190 | orchestrator | ok: [localhost] 2026-01-03 00:57:33.870201 | orchestrator | 2026-01-03 00:57:33.870509 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2026-01-03 00:57:33.870531 | orchestrator | 2026-01-03 00:57:33.870542 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-03 00:57:33.870553 | orchestrator | Saturday 03 January 2026 00:54:29 +0000 (0:00:00.144) 0:00:03.206 ****** 2026-01-03 00:57:33.870565 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.870577 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.870588 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.870599 | orchestrator | 2026-01-03 00:57:33.870606 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-03 00:57:33.870612 | orchestrator | Saturday 03 January 2026 00:54:29 +0000 (0:00:00.302) 0:00:03.509 ****** 2026-01-03 00:57:33.870618 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-01-03 00:57:33.870625 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-01-03 00:57:33.870631 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-01-03 00:57:33.870638 | orchestrator | 2026-01-03 00:57:33.870644 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-01-03 00:57:33.870655 | orchestrator | 2026-01-03 00:57:33.870672 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-01-03 00:57:33.870682 | orchestrator | Saturday 03 January 2026 00:54:30 +0000 (0:00:00.530) 0:00:04.040 ****** 2026-01-03 00:57:33.870693 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-03 00:57:33.870702 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-03 00:57:33.870713 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-03 00:57:33.870723 | orchestrator | 2026-01-03 00:57:33.870733 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-03 
00:57:33.870763 | orchestrator | Saturday 03 January 2026 00:54:30 +0000 (0:00:00.344) 0:00:04.385 ****** 2026-01-03 00:57:33.870772 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:57:33.870779 | orchestrator | 2026-01-03 00:57:33.870785 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-01-03 00:57:33.870791 | orchestrator | Saturday 03 January 2026 00:54:31 +0000 (0:00:00.581) 0:00:04.966 ****** 2026-01-03 00:57:33.870847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-03 00:57:33.870868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-03 00:57:33.870879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-03 00:57:33.870886 | orchestrator | 2026-01-03 00:57:33.870911 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-01-03 00:57:33.870920 | orchestrator | Saturday 03 January 2026 00:54:34 +0000 (0:00:02.687) 0:00:07.654 ****** 2026-01-03 00:57:33.870937 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.870949 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.870960 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.870971 | orchestrator | 2026-01-03 00:57:33.870981 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-01-03 00:57:33.870992 | orchestrator | Saturday 03 January 2026 00:54:34 +0000 (0:00:00.660) 0:00:08.315 ****** 2026-01-03 00:57:33.871022 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.871033 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.871043 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.871054 | orchestrator | 2026-01-03 00:57:33.871061 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-01-03 00:57:33.871067 | orchestrator | Saturday 03 January 2026 00:54:36 +0000 (0:00:01.492) 0:00:09.807 ****** 2026-01-03 00:57:33.871074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-03 00:57:33.871111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-03 00:57:33.871125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-03 00:57:33.871134 | orchestrator | 2026-01-03 00:57:33.871147 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-01-03 00:57:33.871163 | orchestrator | Saturday 03 January 2026 00:54:39 +0000 (0:00:03.025) 0:00:12.833 ****** 2026-01-03 00:57:33.871174 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.871185 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.871196 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.871208 | orchestrator | 2026-01-03 00:57:33.871221 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-01-03 00:57:33.871233 | orchestrator | Saturday 03 January 2026 00:54:40 +0000 (0:00:01.181) 0:00:14.015 ****** 2026-01-03 00:57:33.871239 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.871246 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:33.871252 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:57:33.871258 | orchestrator | 
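The service dict dumped repeatedly above shows how the haproxy integration treats the Galera cluster as active/standby: the `custom_member_list` marks testbed-node-0 as the only active backend and testbed-node-1/2 as `backup`, so writes always land on a single node. As an illustrative sketch only (this is not kolla-ansible's actual template), the logged structure could be rendered into a haproxy `listen` block like this:

```python
# Illustrative sketch: render the mariadb haproxy service definition from the
# log above into a haproxy "listen" section. The dict mirrors the logged
# structure; the rendering logic itself is an assumption, not kolla-ansible's.
service = {
    "enabled": True,
    "mode": "tcp",
    "listen_port": "3306",
    "frontend_tcp_extra": ["option clitcpka", "timeout client 3600s"],
    "backend_tcp_extra": ["option srvtcpka", "timeout server 3600s"],
    "custom_member_list": [
        " server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5",
        " server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup",
        " server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup",
    ],
}

def render_listen_block(name: str, svc: dict) -> str:
    """Build a haproxy listen section from a kolla-style service dict."""
    lines = [f"listen {name}"]
    lines.append(f"  mode {svc['mode']}")
    lines.append(f"  bind *:{svc['listen_port']}")
    for opt in svc["frontend_tcp_extra"] + svc["backend_tcp_extra"]:
        lines.append(f"  {opt}")
    # Member lines are passed through verbatim; haproxy sends traffic to a
    # "backup" server only after every non-backup member fails its check.
    lines.extend(f" {m}" for m in svc["custom_member_list"] if m.strip())
    return "\n".join(lines)

print(render_listen_block("mariadb", service))
```

The `backup` keyword is what prevents multi-writer conflicts in Galera: node-1 and node-2 receive client connections only once node-0 fails the `check port 3306` health check (every 2000 ms, down after 5 failures, up after 2 successes).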
2026-01-03 00:57:33.871264 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-03 00:57:33.871270 | orchestrator | Saturday 03 January 2026 00:54:44 +0000 (0:00:03.584) 0:00:17.599 ****** 2026-01-03 00:57:33.871277 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:57:33.871283 | orchestrator | 2026-01-03 00:57:33.871289 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-03 00:57:33.871295 | orchestrator | Saturday 03 January 2026 00:54:44 +0000 (0:00:00.448) 0:00:18.047 ****** 2026-01-03 00:57:33.871312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:57:33.871325 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.871332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:57:33.871338 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.871354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:57:33.871366 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.871372 | orchestrator | 2026-01-03 00:57:33.871378 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-03 00:57:33.871384 | orchestrator | Saturday 03 January 2026 00:54:47 +0000 (0:00:02.623) 0:00:20.671 ****** 2026-01-03 00:57:33.871391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:57:33.871398 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.871411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:57:33.871422 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.871428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:57:33.871435 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.871441 | orchestrator | 2026-01-03 00:57:33.871447 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-03 00:57:33.871453 | orchestrator | Saturday 03 January 2026 00:54:50 +0000 (0:00:02.945) 0:00:23.616 ****** 2026-01-03 00:57:33.871463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:57:33.871473 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.871485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:57:33.871491 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.871503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:57:33.871514 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.871520 | orchestrator | 2026-01-03 00:57:33.871526 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-01-03 00:57:33.871532 | orchestrator | Saturday 03 January 2026 00:54:52 +0000 (0:00:01.928) 0:00:25.544 ****** 2026-01-03 00:57:33.871544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 
rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-03 00:57:33.871551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-03 00:57:33.871569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-03 00:57:33.871577 | orchestrator | 2026-01-03 00:57:33.871583 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-01-03 00:57:33.871589 | orchestrator | Saturday 03 January 2026 00:54:55 +0000 (0:00:03.026) 0:00:28.571 ****** 2026-01-03 00:57:33.871596 | orchestrator | changed: [testbed-node-0] => { 2026-01-03 00:57:33.871602 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:57:33.871608 | orchestrator | } 2026-01-03 00:57:33.871614 | orchestrator | changed: [testbed-node-1] => { 2026-01-03 00:57:33.871620 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:57:33.871626 | orchestrator | } 2026-01-03 00:57:33.871632 | orchestrator | changed: [testbed-node-2] => { 2026-01-03 00:57:33.871638 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:57:33.871645 | orchestrator | } 2026-01-03 00:57:33.871651 | orchestrator | 2026-01-03 00:57:33.871657 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-03 00:57:33.871663 | orchestrator | Saturday 03 January 2026 00:54:55 +0000 (0:00:00.474) 0:00:29.046 ****** 2026-01-03 00:57:33.871672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:57:33.871682 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.871694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:57:33.871701 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.871708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:57:33.871718 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.871724 | orchestrator | 2026-01-03 00:57:33.871730 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-01-03 00:57:33.871736 | orchestrator | Saturday 03 January 2026 00:54:57 +0000 (0:00:02.493) 0:00:31.539 ****** 2026-01-03 00:57:33.871742 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.871748 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.871757 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.871763 | orchestrator | 2026-01-03 00:57:33.871769 | orchestrator | 
TASK [mariadb : Cleaning up temp file on localhost] ****************************
2026-01-03 00:57:33.871775 | orchestrator | Saturday 03 January 2026 00:54:58 +0000 (0:00:00.090) 0:00:31.824 ******
2026-01-03 00:57:33.871781 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.871788 | orchestrator |
2026-01-03 00:57:33.871794 | orchestrator | TASK [mariadb : Stop MariaDB containers] ***************************************
2026-01-03 00:57:33.871800 | orchestrator | Saturday 03 January 2026 00:54:58 +0000 (0:00:00.389) 0:00:31.914 ******
2026-01-03 00:57:33.871806 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.871812 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.871818 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.871824 | orchestrator |
2026-01-03 00:57:33.871830 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************
2026-01-03 00:57:33.871837 | orchestrator | Saturday 03 January 2026 00:54:58 +0000 (0:00:00.274) 0:00:32.303 ******
2026-01-03 00:57:33.871846 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.871852 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.871858 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.871864 | orchestrator |
2026-01-03 00:57:33.871873 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ******************************
2026-01-03 00:57:33.871884 | orchestrator | Saturday 03 January 2026 00:54:59 +0000 (0:00:00.283) 0:00:32.578 ******
2026-01-03 00:57:33.871891 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.871897 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.871903 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.871909 | orchestrator |
2026-01-03 00:57:33.871915 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ******************************
2026-01-03 00:57:33.871921 | orchestrator | Saturday 03 January 2026 00:54:59 +0000 (0:00:00.265) 0:00:32.862 ******
2026-01-03 00:57:33.871927 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.871934 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.871940 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.871946 | orchestrator |
2026-01-03 00:57:33.871952 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] ***************************
2026-01-03 00:57:33.871961 | orchestrator | Saturday 03 January 2026 00:54:59 +0000 (0:00:00.415) 0:00:33.127 ******
2026-01-03 00:57:33.871967 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.871974 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.871980 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.871986 | orchestrator |
2026-01-03 00:57:33.871992 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] ****************************
2026-01-03 00:57:33.872014 | orchestrator | Saturday 03 January 2026 00:55:00 +0000 (0:00:00.326) 0:00:33.543 ******
2026-01-03 00:57:33.872023 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.872034 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.872045 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.872056 | orchestrator |
2026-01-03 00:57:33.872066 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ********************
2026-01-03 00:57:33.872077 | orchestrator | Saturday 03 January 2026 00:55:00 +0000 (0:00:00.396) 0:00:33.870 ******
2026-01-03 00:57:33.872088 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-03 00:57:33.872099 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-03 00:57:33.872110 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-03 00:57:33.872116 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-03 00:57:33.872122 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-03 00:57:33.872129 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-03 00:57:33.872138 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.872148 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.872158 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-03 00:57:33.872167 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-03 00:57:33.872177 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-03 00:57:33.872186 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.872196 | orchestrator |
2026-01-03 00:57:33.872207 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] ***
2026-01-03 00:57:33.872218 | orchestrator | Saturday 03 January 2026 00:55:00 +0000 (0:00:00.302) 0:00:34.267 ******
2026-01-03 00:57:33.872228 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.872239 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.872246 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.872252 | orchestrator |
2026-01-03 00:57:33.872259 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] *****
2026-01-03 00:57:33.872265 | orchestrator | Saturday 03 January 2026 00:55:01 +0000 (0:00:00.562) 0:00:34.569 ******
2026-01-03 00:57:33.872271 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.872277 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.872283 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.872290 | orchestrator |
2026-01-03 00:57:33.872299 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] ***************
2026-01-03 00:57:33.872375 | orchestrator | Saturday 03 January 2026 00:55:01 +0000 (0:00:00.331) 0:00:35.132 ******
2026-01-03 00:57:33.872390 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.872401 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.872411 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.872420 | orchestrator |
2026-01-03 00:57:33.872426 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] ***
2026-01-03 00:57:33.872432 | orchestrator | Saturday 03 January 2026 00:55:01 +0000 (0:00:00.387) 0:00:35.463 ******
2026-01-03 00:57:33.872439 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.872445 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.872451 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.872457 | orchestrator |
2026-01-03 00:57:33.872463 | orchestrator | TASK [mariadb : Starting first MariaDB container] ******************************
2026-01-03 00:57:33.872470 | orchestrator | Saturday 03 January 2026 00:55:02 +0000 (0:00:00.461) 0:00:35.851 ******
2026-01-03 00:57:33.872483 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.872493 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.872500 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.872506 | orchestrator |
2026-01-03 00:57:33.872512 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ******************************
2026-01-03 00:57:33.872518 | orchestrator | Saturday 03 January 2026 00:55:02 +0000 (0:00:00.551) 0:00:36.313 ******
2026-01-03 00:57:33.872547 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.872555 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.872561 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.872567 | orchestrator |
2026-01-03 00:57:33.872573 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************
2026-01-03 00:57:33.872580 | orchestrator | Saturday 03 January 2026 00:55:03 +0000 (0:00:00.357) 0:00:36.864 ******
2026-01-03 00:57:33.872586 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.872592 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.872598 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.872604 | orchestrator | 2026-01-03 00:57:33.872611 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-01-03 00:57:33.872623 | orchestrator | Saturday 03 January 2026 00:55:03 +0000 (0:00:00.357) 0:00:37.222 ****** 2026-01-03 00:57:33.872630 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.872636 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.872642 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.872649 | orchestrator | 2026-01-03 00:57:33.872655 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-01-03 00:57:33.872661 | orchestrator | Saturday 03 January 2026 00:55:04 +0000 (0:00:00.328) 0:00:37.551 ****** 2026-01-03 00:57:33.872669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:57:33.872676 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.872687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:57:33.872704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:57:33.872712 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.872718 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.872724 | orchestrator | 2026-01-03 00:57:33.872731 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-01-03 00:57:33.872737 | orchestrator | Saturday 03 January 2026 00:55:06 +0000 (0:00:02.093) 0:00:39.645 ****** 2026-01-03 00:57:33.872743 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.872749 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.872755 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.872761 | orchestrator | 2026-01-03 00:57:33.872768 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-01-03 00:57:33.872796 | orchestrator | Saturday 03 January 2026 00:55:06 +0000 (0:00:00.335) 0:00:39.980 ****** 2026-01-03 00:57:33.872807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:57:33.872815 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.872827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:57:33.872834 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.872846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 
'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:57:33.872857 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.872864 | orchestrator | 2026-01-03 00:57:33.872870 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-01-03 00:57:33.872876 | orchestrator | Saturday 03 January 2026 00:55:08 +0000 (0:00:02.244) 0:00:42.225 ****** 2026-01-03 00:57:33.872882 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.872888 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.872894 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.872900 | orchestrator | 2026-01-03 00:57:33.872907 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-01-03 00:57:33.872917 | orchestrator | Saturday 03 January 
2026 00:55:08 +0000 (0:00:00.306) 0:00:42.531 ****** 2026-01-03 00:57:33.872923 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.872930 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.872936 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.872942 | orchestrator | 2026-01-03 00:57:33.872948 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-01-03 00:57:33.872955 | orchestrator | Saturday 03 January 2026 00:55:09 +0000 (0:00:00.308) 0:00:42.840 ****** 2026-01-03 00:57:33.872961 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.872967 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.872973 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.872979 | orchestrator | 2026-01-03 00:57:33.872986 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-01-03 00:57:33.872992 | orchestrator | Saturday 03 January 2026 00:55:09 +0000 (0:00:00.306) 0:00:43.146 ****** 2026-01-03 00:57:33.873022 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.873030 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.873036 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.873042 | orchestrator | 2026-01-03 00:57:33.873048 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-01-03 00:57:33.873054 | orchestrator | Saturday 03 January 2026 00:55:10 +0000 (0:00:00.660) 0:00:43.807 ****** 2026-01-03 00:57:33.873061 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.873067 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.873073 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.873079 | orchestrator | 2026-01-03 00:57:33.873085 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-01-03 00:57:33.873096 | orchestrator | Saturday 03 January 
2026 00:55:10 +0000 (0:00:00.319) 0:00:44.126 ******
2026-01-03 00:57:33.873102 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:57:33.873128 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:57:33.873137 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:57:33.873148 | orchestrator |
2026-01-03 00:57:33.873157 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-01-03 00:57:33.873166 | orchestrator | Saturday 03 January 2026 00:55:11 +0000 (0:00:00.934) 0:00:45.061 ******
2026-01-03 00:57:33.873176 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:57:33.873185 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:57:33.873198 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:57:33.873212 | orchestrator |
2026-01-03 00:57:33.873222 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-01-03 00:57:33.873233 | orchestrator | Saturday 03 January 2026 00:55:12 +0000 (0:00:00.495) 0:00:45.556 ******
2026-01-03 00:57:33.873243 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:57:33.873253 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:57:33.873262 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:57:33.873272 | orchestrator |
2026-01-03 00:57:33.873282 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-01-03 00:57:33.873291 | orchestrator | Saturday 03 January 2026 00:55:12 +0000 (0:00:00.322) 0:00:45.879 ******
2026-01-03 00:57:33.873302 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-01-03 00:57:33.873312 | orchestrator | ...ignoring
2026-01-03 00:57:33.873322 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-01-03 00:57:33.873332 | orchestrator | ...ignoring
2026-01-03 00:57:33.873342 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-01-03 00:57:33.873352 | orchestrator | ...ignoring
2026-01-03 00:57:33.873362 | orchestrator |
2026-01-03 00:57:33.873372 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-01-03 00:57:33.873382 | orchestrator | Saturday 03 January 2026 00:55:23 +0000 (0:00:10.843) 0:00:56.722 ******
2026-01-03 00:57:33.873393 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:57:33.873404 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:57:33.873414 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:57:33.873424 | orchestrator |
2026-01-03 00:57:33.873434 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-01-03 00:57:33.873444 | orchestrator | Saturday 03 January 2026 00:55:23 +0000 (0:00:00.476) 0:00:57.071 ******
2026-01-03 00:57:33.873454 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.873465 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.873475 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.873486 | orchestrator |
2026-01-03 00:57:33.873496 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-01-03 00:57:33.873508 | orchestrator | Saturday 03 January 2026 00:55:24 +0000 (0:00:00.328) 0:00:57.548 ******
2026-01-03 00:57:33.873514 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.873520 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.873526 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.873533 | orchestrator |
2026-01-03 00:57:33.873551 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-01-03 00:57:33.873566 | orchestrator | Saturday 03 January 2026 00:55:24 +0000 (0:00:00.314) 0:00:57.876 ******
2026-01-03 00:57:33.873576 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.873586 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.873597 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.873607 | orchestrator |
2026-01-03 00:57:33.873617 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-01-03 00:57:33.873633 | orchestrator | Saturday 03 January 2026 00:55:24 +0000 (0:00:00.314) 0:00:58.191 ******
2026-01-03 00:57:33.873643 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:57:33.873652 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:57:33.873662 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:57:33.873673 | orchestrator |
2026-01-03 00:57:33.873684 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-01-03 00:57:33.873695 | orchestrator | Saturday 03 January 2026 00:55:24 +0000 (0:00:00.520) 0:00:58.505 ******
2026-01-03 00:57:33.873706 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.873728 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.873739 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.873750 | orchestrator |
2026-01-03 00:57:33.873761 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-01-03 00:57:33.873772 | orchestrator | Saturday 03 January 2026 00:55:25 +0000 (0:00:00.520) 0:00:59.026 ******
2026-01-03 00:57:33.873782 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.873789 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.873795 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-01-03 00:57:33.873801 | orchestrator |
2026-01-03
00:57:33.873807 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-01-03 00:57:33.873813 | orchestrator | Saturday 03 January 2026 00:55:25 +0000 (0:00:00.383) 0:00:59.409 ****** 2026-01-03 00:57:33.873819 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.873826 | orchestrator | 2026-01-03 00:57:33.873832 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-01-03 00:57:33.873838 | orchestrator | Saturday 03 January 2026 00:55:35 +0000 (0:00:09.547) 0:01:08.957 ****** 2026-01-03 00:57:33.873844 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.873850 | orchestrator | 2026-01-03 00:57:33.873856 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-03 00:57:33.873863 | orchestrator | Saturday 03 January 2026 00:55:35 +0000 (0:00:00.130) 0:01:09.087 ****** 2026-01-03 00:57:33.873869 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.873880 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.873889 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.873898 | orchestrator | 2026-01-03 00:57:33.873907 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-01-03 00:57:33.873915 | orchestrator | Saturday 03 January 2026 00:55:36 +0000 (0:00:00.827) 0:01:09.915 ****** 2026-01-03 00:57:33.873929 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.873943 | orchestrator | 2026-01-03 00:57:33.873954 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-01-03 00:57:33.873963 | orchestrator | Saturday 03 January 2026 00:55:44 +0000 (0:00:08.074) 0:01:17.989 ****** 2026-01-03 00:57:33.873973 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.873983 | orchestrator | 2026-01-03 00:57:33.873993 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
service to sync WSREP] ******* 2026-01-03 00:57:33.874064 | orchestrator | Saturday 03 January 2026 00:55:46 +0000 (0:00:01.659) 0:01:19.649 ****** 2026-01-03 00:57:33.874078 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.874088 | orchestrator | 2026-01-03 00:57:33.874098 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-01-03 00:57:33.874105 | orchestrator | Saturday 03 January 2026 00:55:48 +0000 (0:00:02.156) 0:01:21.805 ****** 2026-01-03 00:57:33.874111 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.874117 | orchestrator | 2026-01-03 00:57:33.874123 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-01-03 00:57:33.874129 | orchestrator | Saturday 03 January 2026 00:55:48 +0000 (0:00:00.099) 0:01:21.904 ****** 2026-01-03 00:57:33.874137 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.874147 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.874158 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.874180 | orchestrator | 2026-01-03 00:57:33.874192 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-01-03 00:57:33.874203 | orchestrator | Saturday 03 January 2026 00:55:48 +0000 (0:00:00.268) 0:01:22.173 ****** 2026-01-03 00:57:33.874214 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.874225 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-01-03 00:57:33.874235 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:33.874245 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:57:33.874255 | orchestrator | 2026-01-03 00:57:33.874261 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-01-03 00:57:33.874268 | orchestrator | skipping: no hosts matched 2026-01-03 00:57:33.874274 | orchestrator | 2026-01-03 00:57:33.874280 
| orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-03 00:57:33.874286 | orchestrator | 2026-01-03 00:57:33.874292 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-03 00:57:33.874299 | orchestrator | Saturday 03 January 2026 00:55:49 +0000 (0:00:00.404) 0:01:22.577 ****** 2026-01-03 00:57:33.874305 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:33.874311 | orchestrator | 2026-01-03 00:57:33.874318 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-03 00:57:33.874324 | orchestrator | Saturday 03 January 2026 00:56:04 +0000 (0:00:15.003) 0:01:37.581 ****** 2026-01-03 00:57:33.874330 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.874336 | orchestrator | 2026-01-03 00:57:33.874342 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-03 00:57:33.874348 | orchestrator | Saturday 03 January 2026 00:56:18 +0000 (0:00:14.653) 0:01:52.234 ****** 2026-01-03 00:57:33.874354 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.874360 | orchestrator | 2026-01-03 00:57:33.874367 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-03 00:57:33.874373 | orchestrator | 2026-01-03 00:57:33.874384 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-03 00:57:33.874390 | orchestrator | Saturday 03 January 2026 00:56:20 +0000 (0:00:01.979) 0:01:54.213 ****** 2026-01-03 00:57:33.874396 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:57:33.874402 | orchestrator | 2026-01-03 00:57:33.874408 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-03 00:57:33.874414 | orchestrator | Saturday 03 January 2026 00:56:43 +0000 (0:00:23.057) 0:02:17.271 ****** 2026-01-03 00:57:33.874421 | 
orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.874427 | orchestrator | 2026-01-03 00:57:33.874433 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-03 00:57:33.874439 | orchestrator | Saturday 03 January 2026 00:56:53 +0000 (0:00:09.593) 0:02:26.865 ****** 2026-01-03 00:57:33.874445 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.874451 | orchestrator | 2026-01-03 00:57:33.874457 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-01-03 00:57:33.874464 | orchestrator | 2026-01-03 00:57:33.874477 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-03 00:57:33.874484 | orchestrator | Saturday 03 January 2026 00:56:55 +0000 (0:00:02.085) 0:02:28.950 ****** 2026-01-03 00:57:33.874490 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.874496 | orchestrator | 2026-01-03 00:57:33.874502 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-03 00:57:33.874509 | orchestrator | Saturday 03 January 2026 00:57:07 +0000 (0:00:11.835) 0:02:40.785 ****** 2026-01-03 00:57:33.874515 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.874521 | orchestrator | 2026-01-03 00:57:33.874527 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-03 00:57:33.874533 | orchestrator | Saturday 03 January 2026 00:57:11 +0000 (0:00:04.599) 0:02:45.385 ****** 2026-01-03 00:57:33.874540 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.874546 | orchestrator | 2026-01-03 00:57:33.874552 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-01-03 00:57:33.874563 | orchestrator | 2026-01-03 00:57:33.874569 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-01-03 00:57:33.874575 | orchestrator | 
Saturday 03 January 2026 00:57:14 +0000 (0:00:02.343) 0:02:47.728 ****** 2026-01-03 00:57:33.874581 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:57:33.874588 | orchestrator | 2026-01-03 00:57:33.874594 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-01-03 00:57:33.874600 | orchestrator | Saturday 03 January 2026 00:57:14 +0000 (0:00:00.527) 0:02:48.256 ****** 2026-01-03 00:57:33.874606 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.874612 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.874619 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.874625 | orchestrator | 2026-01-03 00:57:33.874631 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-01-03 00:57:33.874637 | orchestrator | Saturday 03 January 2026 00:57:16 +0000 (0:00:01.998) 0:02:50.254 ****** 2026-01-03 00:57:33.874644 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.874650 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.874656 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.874662 | orchestrator | 2026-01-03 00:57:33.874668 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-01-03 00:57:33.874674 | orchestrator | Saturday 03 January 2026 00:57:18 +0000 (0:00:02.186) 0:02:52.441 ****** 2026-01-03 00:57:33.874681 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.874687 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.874693 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.874699 | orchestrator | 2026-01-03 00:57:33.874705 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-01-03 00:57:33.874711 | orchestrator | Saturday 03 January 2026 00:57:20 +0000 (0:00:01.938) 0:02:54.379 ****** 2026-01-03 00:57:33.874718 | 
orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.874724 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.874730 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.874736 | orchestrator | 2026-01-03 00:57:33.874742 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-01-03 00:57:33.874749 | orchestrator | Saturday 03 January 2026 00:57:22 +0000 (0:00:01.936) 0:02:56.316 ****** 2026-01-03 00:57:33.874755 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.874762 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: ansible.module_utils.basic.AnsibleModule.fail_json() got multiple values for keyword argument 'changed' 2026-01-03 00:57:33.874782 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\n response.raise_for_status()\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\n raise HTTPError(http_error_msg, response=self)\nrequests.exceptions.HTTPError: 404 Client Error: Not Found for url: http+docker://localhost/v1.47/containers/5faa7e7a9a621ff6a45b3738f4d0daac506866aade2acf5677731cbe621d95f7/json\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File \"/tmp/ansible_kolla_container_facts_payload_t4hsbp7w/ansible_kolla_container_facts_payload.zip/ansible/modules/kolla_container_facts.py\", line 251, in main\n File \"/tmp/ansible_kolla_container_facts_payload_t4hsbp7w/ansible_kolla_container_facts_payload.zip/ansible/modules/kolla_container_facts.py\", line 143, in get_containers\n File \"/usr/lib/python3/dist-packages/docker/models/resource.py\", line 47, in reload\n new_model = self.collection.get(self.id)\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/lib/python3/dist-packages/docker/models/containers.py\", line 954, in get\n resp = self.client.api.inspect_container(container_id)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/lib/python3/dist-packages/docker/utils/decorators.py\", line 19, in wrapped\n return f(self, resource_id, *args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/lib/python3/dist-packages/docker/api/container.py\", line 793, in inspect_container\n return self._result(\n ^^^^^^^^^^^^^\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 281, in _result\n self._raise_for_status(response)\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\n raise create_api_error_from_http_exception(e) from e\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\n raise cls(e, response=response, explanation=explanation) from e\ndocker.errors.NotFound: 404 Client Error for http+docker://localhost/v1.47/containers/5faa7e7a9a621ff6a45b3738f4d0daac506866aade2acf5677731cbe621d95f7/json: Not Found (\"No such container: 5faa7e7a9a621ff6a45b3738f4d0daac506866aade2acf5677731cbe621d95f7\")\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"\", line 107, in \n File \"\", line 99, in _ansiballz_main\n File \"\", line 47, in invoke_module\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_kolla_container_facts_payload_t4hsbp7w/ansible_kolla_container_facts_payload.zip/ansible/modules/kolla_container_facts.py\", line 259, in \n File \"/tmp/ansible_kolla_container_facts_payload_t4hsbp7w/ansible_kolla_container_facts_payload.zip/ansible/modules/kolla_container_facts.py\", line 254, in main\nTypeError: 
ansible.module_utils.basic.AnsibleModule.fail_json() got multiple values for keyword argument 'changed'\n", "module_stdout": "", "msg": "MODULE FAILURE: No start of json char found\nSee stdout/stderr for the exact error", "rc": 1} 2026-01-03 00:57:33.874797 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.874803 | orchestrator | 2026-01-03 00:57:33.874809 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-01-03 00:57:33.874816 | orchestrator | Saturday 03 January 2026 00:57:27 +0000 (0:00:04.274) 0:03:00.591 ****** 2026-01-03 00:57:33.874822 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.874828 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.874834 | orchestrator | 2026-01-03 00:57:33.874840 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-01-03 00:57:33.874846 | orchestrator | Saturday 03 January 2026 00:57:28 +0000 (0:00:01.738) 0:03:02.329 ****** 2026-01-03 00:57:33.874853 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.874859 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.874865 | orchestrator | 2026-01-03 00:57:33.874871 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-01-03 00:57:33.874877 | orchestrator | Saturday 03 January 2026 00:57:29 +0000 (0:00:00.442) 0:03:02.771 ****** 2026-01-03 00:57:33.874883 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.874890 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.874896 | orchestrator | 2026-01-03 00:57:33.874902 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-01-03 00:57:33.874908 | orchestrator | Saturday 03 January 2026 00:57:32 +0000 (0:00:03.079) 0:03:05.851 ****** 2026-01-03 00:57:33.874915 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.874921 | orchestrator | skipping: [testbed-node-2] 
2026-01-03 00:57:33.874927 | orchestrator | 2026-01-03 00:57:33.874933 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:57:33.874940 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-01-03 00:57:33.874950 | orchestrator | testbed-node-0 : ok=34  changed=17  unreachable=0 failed=1  skipped=36  rescued=0 ignored=1  2026-01-03 00:57:33.874957 | orchestrator | testbed-node-1 : ok=22  changed=8  unreachable=0 failed=0 skipped=45  rescued=0 ignored=1  2026-01-03 00:57:33.874967 | orchestrator | testbed-node-2 : ok=22  changed=8  unreachable=0 failed=0 skipped=45  rescued=0 ignored=1  2026-01-03 00:57:33.874973 | orchestrator | 2026-01-03 00:57:33.874980 | orchestrator | 2026-01-03 00:57:33.874986 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:57:33.874992 | orchestrator | Saturday 03 January 2026 00:57:32 +0000 (0:00:00.146) 0:03:05.997 ****** 2026-01-03 00:57:33.875019 | orchestrator | =============================================================================== 2026-01-03 00:57:33.875028 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 38.06s 2026-01-03 00:57:33.875034 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 24.25s 2026-01-03 00:57:33.875041 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.84s 2026-01-03 00:57:33.875047 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.84s 2026-01-03 00:57:33.875057 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.55s 2026-01-03 00:57:33.875064 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.07s 2026-01-03 00:57:33.875070 | orchestrator | mariadb : Wait for MariaDB service port liveness 
------------------------ 4.60s 2026-01-03 00:57:33.875076 | orchestrator | service-check : mariadb | Get container facts --------------------------- 4.27s 2026-01-03 00:57:33.875084 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.06s 2026-01-03 00:57:33.875095 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.58s 2026-01-03 00:57:33.875112 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.08s 2026-01-03 00:57:33.875123 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 3.03s 2026-01-03 00:57:33.875133 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.03s 2026-01-03 00:57:33.875143 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.95s 2026-01-03 00:57:33.875154 | orchestrator | Check MariaDB service --------------------------------------------------- 2.86s 2026-01-03 00:57:33.875164 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.69s 2026-01-03 00:57:33.875174 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.62s 2026-01-03 00:57:33.875183 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.49s 2026-01-03 00:57:33.875193 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.34s 2026-01-03 00:57:33.875205 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 2.24s 2026-01-03 00:57:33.875215 | orchestrator | 2026-01-03 00:57:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:57:36.922277 | orchestrator | 2026-01-03 00:57:36 | INFO  | Task 99d18e6a-b552-46e6-90bf-e2a9eb821ee1 is in state STARTED 2026-01-03 00:57:36.922727 | orchestrator | 2026-01-03 00:57:36 | INFO  | Task 
7bb673d7-f17c-42c0-ae91-a16a5fe0077a is in state STARTED 2026-01-03 00:57:36.924067 | orchestrator | 2026-01-03 00:57:36 | INFO  | Task 41c057ee-3c03-40bc-9d3e-5ff87ebdd950 is in state STARTED 2026-01-03 00:57:36.924113 | orchestrator | 2026-01-03 00:57:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:58:34.766469 | orchestrator | 2026-01-03 00:58:34 | INFO  | Task 99d18e6a-b552-46e6-90bf-e2a9eb821ee1 is in state STARTED 2026-01-03 00:58:34.768484 | orchestrator | 2026-01-03 00:58:34 | INFO  | Task 7bb673d7-f17c-42c0-ae91-a16a5fe0077a is in state STARTED 2026-01-03 00:58:34.769826 | orchestrator | 
2026-01-03 00:58:34 | INFO  | Task 41c057ee-3c03-40bc-9d3e-5ff87ebdd950 is in state STARTED 2026-01-03 00:58:34.769866 | orchestrator | 2026-01-03 00:58:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:58:37.818185 | orchestrator | 2026-01-03 00:58:37 | INFO  | Task 99d18e6a-b552-46e6-90bf-e2a9eb821ee1 is in state STARTED 2026-01-03 00:58:37.819458 | orchestrator | 2026-01-03 00:58:37 | INFO  | Task 7bb673d7-f17c-42c0-ae91-a16a5fe0077a is in state STARTED 2026-01-03 00:58:37.821165 | orchestrator | 2026-01-03 00:58:37 | INFO  | Task 41c057ee-3c03-40bc-9d3e-5ff87ebdd950 is in state STARTED 2026-01-03 00:58:37.821211 | orchestrator | 2026-01-03 00:58:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:58:40.857522 | orchestrator | 2026-01-03 00:58:40 | INFO  | Task dd16cd92-4ba7-479b-9b44-63681e0e3f35 is in state STARTED 2026-01-03 00:58:40.857932 | orchestrator | 2026-01-03 00:58:40 | INFO  | Task 99d18e6a-b552-46e6-90bf-e2a9eb821ee1 is in state STARTED 2026-01-03 00:58:40.863875 | orchestrator | 2026-01-03 00:58:40 | INFO  | Task 7bb673d7-f17c-42c0-ae91-a16a5fe0077a is in state SUCCESS 2026-01-03 00:58:40.864742 | orchestrator | 2026-01-03 00:58:40.864790 | orchestrator | 2026-01-03 00:58:40.864797 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-03 00:58:40.864803 | orchestrator | 2026-01-03 00:58:40.864807 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-03 00:58:40.864812 | orchestrator | Saturday 03 January 2026 00:57:36 +0000 (0:00:00.260) 0:00:00.260 ****** 2026-01-03 00:58:40.864828 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:58:40.864834 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:58:40.864839 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:58:40.864865 | orchestrator | 2026-01-03 00:58:40.864871 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 
2026-01-03 00:58:40.864877 | orchestrator | Saturday 03 January 2026 00:57:37 +0000 (0:00:00.287) 0:00:00.548 ****** 2026-01-03 00:58:40.864882 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-01-03 00:58:40.864889 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-01-03 00:58:40.864894 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-01-03 00:58:40.864900 | orchestrator | 2026-01-03 00:58:40.864906 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-01-03 00:58:40.864912 | orchestrator | 2026-01-03 00:58:40.864917 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-03 00:58:40.864923 | orchestrator | Saturday 03 January 2026 00:57:37 +0000 (0:00:00.407) 0:00:00.955 ****** 2026-01-03 00:58:40.864929 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:58:40.864936 | orchestrator | 2026-01-03 00:58:40.865042 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-01-03 00:58:40.865053 | orchestrator | Saturday 03 January 2026 00:57:38 +0000 (0:00:00.539) 0:00:01.495 ****** 2026-01-03 00:58:40.865142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 
'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-03 00:58:40.865373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-03 00:58:40.865397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-03 00:58:40.865414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-03 00:58:40.865420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-03 00:58:40.865426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 
'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-03 00:58:40.865434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-03 00:58:40.865442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-03 00:58:40.865448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-03 00:58:40.865460 | orchestrator | 2026-01-03 00:58:40.865464 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-01-03 00:58:40.865475 | orchestrator | Saturday 03 January 2026 00:57:40 +0000 (0:00:01.909) 0:00:03.405 ****** 2026-01-03 00:58:40.865481 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:58:40.865489 | orchestrator | 2026-01-03 00:58:40.865494 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-01-03 00:58:40.865500 | orchestrator | Saturday 03 January 2026 00:57:40 +0000 (0:00:00.126) 0:00:03.531 ****** 2026-01-03 00:58:40.865510 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:58:40.865516 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:40.865522 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:40.865527 | orchestrator | 2026-01-03 00:58:40.865533 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-01-03 00:58:40.865539 | orchestrator | Saturday 03 January 2026 00:57:40 +0000 (0:00:00.437) 0:00:03.968 ****** 2026-01-03 00:58:40.865554 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-03 00:58:40.865561 | orchestrator | 2026-01-03 00:58:40.865618 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-03 00:58:40.865626 | orchestrator | Saturday 03 January 2026 00:57:41 +0000 (0:00:00.797) 0:00:04.766 ****** 2026-01-03 
00:58:40.865633 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:58:40.865638 | orchestrator | 2026-01-03 00:58:40.865641 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-01-03 00:58:40.865645 | orchestrator | Saturday 03 January 2026 00:57:41 +0000 (0:00:00.498) 0:00:05.264 ****** 2026-01-03 00:58:40.865649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-03 00:58:40.865655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-03 00:58:40.865922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-03 00:58:40.865937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-03 00:58:40.865942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-03 00:58:40.865947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-03 00:58:40.865951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-03 00:58:40.865955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-03 00:58:40.865963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-03 00:58:40.865967 | orchestrator | 2026-01-03 00:58:40.865972 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-01-03 00:58:40.865976 | orchestrator | Saturday 03 January 2026 00:57:45 +0000 (0:00:03.104) 0:00:08.368 ****** 2026-01-03 00:58:40.865995 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-03 00:58:40.866001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:58:40.866005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-03 00:58:40.866009 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:58:40.866048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-03 00:58:40.866057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:58:40.866077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-03 00:58:40.866082 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:40.866086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin', 'option httpchk']}}}})  2026-01-03 00:58:40.866091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:58:40.866113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-03 00:58:40.866124 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:40.866130 | orchestrator | 2026-01-03 00:58:40.866137 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-01-03 00:58:40.866142 | orchestrator | Saturday 03 January 2026 00:57:45 +0000 (0:00:00.776) 0:00:09.144 ****** 2026-01-03 00:58:40.866149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-03 00:58:40.866175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:58:40.866182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-03 00:58:40.866188 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:58:40.866194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-03 00:58:40.866210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:58:40.866217 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-03 00:58:40.866223 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:40.866264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-03 00:58:40.866272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:58:40.866279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-03 00:58:40.866285 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:40.866291 | orchestrator | 2026-01-03 00:58:40.866298 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-01-03 00:58:40.866304 | orchestrator | Saturday 03 January 2026 00:57:46 +0000 (0:00:00.728) 0:00:09.873 ****** 2026-01-03 00:58:40.866353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-03 00:58:40.866363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-03 00:58:40.866385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-03 00:58:40.866391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-03 00:58:40.866395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-01-03 00:58:40.866403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-03 00:58:40.866407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-03 00:58:40.866411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-03 00:58:40.866429 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-03 00:58:40.866434 | orchestrator | 2026-01-03 00:58:40.866438 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-01-03 00:58:40.866442 | orchestrator | Saturday 03 January 2026 00:57:49 +0000 (0:00:03.381) 0:00:13.254 ****** 2026-01-03 00:58:40.866446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-03 00:58:40.866456 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:58:40.866460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-03 00:58:40.866465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:58:40.866484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-03 00:58:40.866489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:58:40.866497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-03 00:58:40.866501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-03 00:58:40.866506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-03 00:58:40.866509 | orchestrator | 2026-01-03 00:58:40.866513 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-01-03 00:58:40.866517 | orchestrator | Saturday 03 January 2026 00:57:55 +0000 (0:00:05.389) 0:00:18.643 ****** 2026-01-03 00:58:40.866521 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:58:40.866526 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:58:40.866529 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:58:40.866533 | orchestrator | 2026-01-03 00:58:40.866537 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-01-03 00:58:40.866541 | orchestrator | Saturday 03 January 2026 00:57:56 +0000 (0:00:01.318) 0:00:19.961 ****** 2026-01-03 00:58:40.866544 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:58:40.866548 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:40.866552 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:40.866556 | orchestrator | 2026-01-03 00:58:40.866559 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-01-03 00:58:40.866575 | orchestrator | Saturday 03 January 2026 00:57:57 +0000 (0:00:00.664) 0:00:20.626 ****** 2026-01-03 00:58:40.866579 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:58:40.866583 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:40.866587 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:40.866591 | orchestrator | 2026-01-03 00:58:40.866595 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-01-03 00:58:40.866599 | orchestrator | Saturday 03 January 2026 00:57:57 +0000 (0:00:00.319) 0:00:20.946 ****** 2026-01-03 00:58:40.866602 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:58:40.866606 | orchestrator | skipping: [testbed-node-1] 
2026-01-03 00:58:40.866610 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:40.866613 | orchestrator | 2026-01-03 00:58:40.866617 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-01-03 00:58:40.866625 | orchestrator | Saturday 03 January 2026 00:57:58 +0000 (0:00:00.557) 0:00:21.504 ****** 2026-01-03 00:58:40.866629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-03 00:58:40.866681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:58:40.866693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-03 00:58:40.866697 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:58:40.866702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-03 00:58:40.866721 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:58:40.866730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-03 00:58:40.866734 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:40.866738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': 
True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-03 00:58:40.866742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:58:40.866746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-03 00:58:40.866750 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:40.866754 | orchestrator | 2026-01-03 00:58:40.866758 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-03 00:58:40.866762 | orchestrator | Saturday 03 January 
2026 00:57:58 +0000 (0:00:00.570) 0:00:22.074 ****** 2026-01-03 00:58:40.866766 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:58:40.866770 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:40.866773 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:40.866777 | orchestrator | 2026-01-03 00:58:40.866781 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-01-03 00:58:40.866785 | orchestrator | Saturday 03 January 2026 00:57:59 +0000 (0:00:00.283) 0:00:22.357 ****** 2026-01-03 00:58:40.866794 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-03 00:58:40.866809 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-03 00:58:40.866814 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-03 00:58:40.866818 | orchestrator | 2026-01-03 00:58:40.866822 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-01-03 00:58:40.866828 | orchestrator | Saturday 03 January 2026 00:58:00 +0000 (0:00:01.703) 0:00:24.061 ****** 2026-01-03 00:58:40.866832 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-03 00:58:40.866836 | orchestrator | 2026-01-03 00:58:40.866840 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-01-03 00:58:40.866843 | orchestrator | Saturday 03 January 2026 00:58:01 +0000 (0:00:00.883) 0:00:24.944 ****** 2026-01-03 00:58:40.866847 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:58:40.866851 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:40.866855 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:40.866859 | orchestrator | 2026-01-03 00:58:40.866863 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 
2026-01-03 00:58:40.866866 | orchestrator | Saturday 03 January 2026 00:58:02 +0000 (0:00:00.781) 0:00:25.725 ****** 2026-01-03 00:58:40.866870 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-03 00:58:40.866874 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-03 00:58:40.866878 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-03 00:58:40.866882 | orchestrator | 2026-01-03 00:58:40.866886 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-01-03 00:58:40.866891 | orchestrator | Saturday 03 January 2026 00:58:03 +0000 (0:00:01.303) 0:00:27.029 ****** 2026-01-03 00:58:40.866894 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:58:40.866898 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:58:40.866902 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:58:40.866906 | orchestrator | 2026-01-03 00:58:40.866910 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-01-03 00:58:40.866914 | orchestrator | Saturday 03 January 2026 00:58:03 +0000 (0:00:00.278) 0:00:27.307 ****** 2026-01-03 00:58:40.866918 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-03 00:58:40.866922 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-03 00:58:40.866926 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-03 00:58:40.866930 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-03 00:58:40.866934 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-03 00:58:40.866937 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-03 00:58:40.866941 | orchestrator | changed: [testbed-node-0] => (item={'src': 
'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-03 00:58:40.866945 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-03 00:58:40.866950 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-03 00:58:40.866954 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-03 00:58:40.866958 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-03 00:58:40.866962 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-03 00:58:40.866966 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-03 00:58:40.866969 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-03 00:58:40.866976 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-03 00:58:40.866980 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-03 00:58:40.866984 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-03 00:58:40.866988 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-03 00:58:40.866992 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-03 00:58:40.866997 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-03 00:58:40.867001 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-03 00:58:40.867006 | orchestrator | 2026-01-03 00:58:40.867011 
| orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-01-03 00:58:40.867015 | orchestrator | Saturday 03 January 2026 00:58:13 +0000 (0:00:09.174) 0:00:36.481 ****** 2026-01-03 00:58:40.867020 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-03 00:58:40.867024 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-03 00:58:40.867029 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-03 00:58:40.867033 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-03 00:58:40.867053 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-03 00:58:40.867058 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-03 00:58:40.867063 | orchestrator | 2026-01-03 00:58:40.867068 | orchestrator | TASK [service-check-containers : keystone | Check containers] ****************** 2026-01-03 00:58:40.867075 | orchestrator | Saturday 03 January 2026 00:58:16 +0000 (0:00:02.854) 0:00:39.335 ****** 2026-01-03 00:58:40.867081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-03 00:58:40.867087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-03 00:58:40.867095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-03 00:58:40.867103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-03 00:58:40.867111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-03 00:58:40.867116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-03 00:58:40.867120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-03 00:58:40.867125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-03 00:58:40.867132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-03 00:58:40.867137 | orchestrator | 2026-01-03 00:58:40.867142 | orchestrator | TASK [service-check-containers : keystone | Notify handlers to restart containers] *** 2026-01-03 00:58:40.867146 | orchestrator | Saturday 03 January 2026 00:58:18 +0000 (0:00:02.573) 0:00:41.909 ****** 2026-01-03 00:58:40.867151 | orchestrator | changed: [testbed-node-0] => { 2026-01-03 00:58:40.867155 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:58:40.867160 | orchestrator | } 2026-01-03 00:58:40.867164 | orchestrator | changed: [testbed-node-1] => { 2026-01-03 00:58:40.867169 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:58:40.867174 | orchestrator | } 2026-01-03 00:58:40.867178 | orchestrator | changed: [testbed-node-2] => { 2026-01-03 00:58:40.867182 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:58:40.867187 | orchestrator | } 2026-01-03 00:58:40.867192 | orchestrator | 2026-01-03 00:58:40.867196 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-03 00:58:40.867201 | orchestrator | Saturday 03 January 2026 00:58:18 +0000 (0:00:00.324) 0:00:42.234 ****** 2026-01-03 00:58:40.867214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-03 00:58:40.867221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:58:40.867268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 
'timeout': '30'}}})  2026-01-03 00:58:40.867281 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:58:40.867287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-03 00:58:40.867295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:58:40.867305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-03 00:58:40.867312 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:40.867322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-03 00:58:40.867329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:58:40.867341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-03 00:58:40.867348 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:40.867355 | orchestrator | 2026-01-03 00:58:40.867362 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-03 00:58:40.867367 | orchestrator | Saturday 03 January 2026 00:58:19 +0000 (0:00:00.874) 0:00:43.108 ****** 2026-01-03 00:58:40.867371 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:58:40.867375 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:40.867379 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:40.867383 | orchestrator | 2026-01-03 00:58:40.867386 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-01-03 00:58:40.867390 | orchestrator | Saturday 03 January 2026 00:58:20 +0000 (0:00:00.294) 0:00:43.403 ****** 2026-01-03 00:58:40.867394 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:58:40.867398 | orchestrator | 2026-01-03 00:58:40.867401 | orchestrator | TASK [keystone : 
Creating Keystone database user and setting permissions] ****** 2026-01-03 00:58:40.867405 | orchestrator | Saturday 03 January 2026 00:58:22 +0000 (0:00:02.519) 0:00:45.922 ****** 2026-01-03 00:58:40.867409 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:58:40.867413 | orchestrator | 2026-01-03 00:58:40.867417 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-01-03 00:58:40.867420 | orchestrator | Saturday 03 January 2026 00:58:25 +0000 (0:00:02.570) 0:00:48.493 ****** 2026-01-03 00:58:40.867424 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:58:40.867428 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:58:40.867432 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:58:40.867436 | orchestrator | 2026-01-03 00:58:40.867440 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-01-03 00:58:40.867444 | orchestrator | Saturday 03 January 2026 00:58:26 +0000 (0:00:00.963) 0:00:49.457 ****** 2026-01-03 00:58:40.867447 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:58:40.867451 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:58:40.867455 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:58:40.867459 | orchestrator | 2026-01-03 00:58:40.867462 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-01-03 00:58:40.867466 | orchestrator | Saturday 03 January 2026 00:58:26 +0000 (0:00:00.315) 0:00:49.773 ****** 2026-01-03 00:58:40.867470 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:58:40.867474 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:40.867478 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:40.867482 | orchestrator | 2026-01-03 00:58:40.867486 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-01-03 00:58:40.867493 | orchestrator | Saturday 03 January 2026 00:58:26 +0000 (0:00:00.515) 0:00:50.288 
****** 2026-01-03 00:58:40.867608 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "Container exited with non-zero return code 1", "rc": 1, "stderr": "+ sudo -E kolla_set_configs\n2026-01-03 00:58:28.574 INFO Loading config file at /var/lib/kolla/config_files/config.json\n2026-01-03 00:58:28.575 INFO Validating config file\n2026-01-03 00:58:28.575 INFO Kolla config strategy set to: COPY_ALWAYS\n2026-01-03 00:58:28.580 INFO Copying service configuration files\n2026-01-03 00:58:28.581 INFO Copying /var/lib/kolla/config_files/keystone-startup.sh to /usr/bin/keystone-startup.sh\n2026-01-03 00:58:28.589 INFO Setting permission for /usr/bin/keystone-startup.sh\n2026-01-03 00:58:28.590 INFO Copying /var/lib/kolla/config_files/keystone.conf to /etc/keystone/keystone.conf\n2026-01-03 00:58:28.590 INFO Setting permission for /etc/keystone/keystone.conf\n2026-01-03 00:58:28.590 INFO Copying /var/lib/kolla/config_files/wsgi-keystone.conf to /etc/apache2/conf-enabled/wsgi-keystone.conf\n2026-01-03 00:58:28.599 INFO Setting permission for /etc/apache2/conf-enabled/wsgi-keystone.conf\n2026-01-03 00:58:28.599 INFO Creating directory /var/lib/kolla/share/ca-certificates\n2026-01-03 00:58:28.600 INFO Setting permission for /var/lib/kolla/share/ca-certificates\n2026-01-03 00:58:28.600 INFO Copying /var/lib/kolla/config_files/ca-certificates/testbed.crt to /var/lib/kolla/share/ca-certificates/testbed.crt\n2026-01-03 00:58:28.600 INFO Setting permission for /var/lib/kolla/share/ca-certificates/testbed.crt\n2026-01-03 00:58:28.600 INFO Writing out command to execute\n2026-01-03 00:58:28.601 INFO Setting permission for /var/log/kolla\n2026-01-03 00:58:28.601 INFO Setting permission for /etc/keystone/fernet-keys\n++ cat /run_command\n+ CMD=/usr/bin/keystone-startup.sh\n+ ARGS=\n+ sudo kolla_copy_cacerts\nrehash: warning: skipping ca-certificates.crt,it does not contain exactly one certificate or CRL\n+ sudo kolla_install_projects\n+ [[ ! -n '' ]]\n+ . 
kolla_extend_start\n++ KEYSTONE_LOG_DIR=/var/log/kolla/keystone\n++ [[ ! -d /var/log/kolla/keystone ]]\n++ mkdir -p /var/log/kolla/keystone\n+++ stat -c %U:%G /var/log/kolla/keystone\n++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\o\\l\\l\\a ]]\n++ chown keystone:kolla /var/log/kolla/keystone\n++ '[' '!' -f /var/log/kolla/keystone/keystone.log ']'\n++ touch /var/log/kolla/keystone/keystone.log\n+++ stat -c %U:%G /var/log/kolla/keystone/keystone.log\n++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\e\\y\\s\\t\\o\\n\\e ]]\n++ chown keystone:keystone /var/log/kolla/keystone/keystone.log\n+++ stat -c %a /var/log/kolla/keystone\n++ [[ 2755 != \\7\\5\\5 ]]\n++ chmod 755 /var/log/kolla/keystone\n++ EXTRA_KEYSTONE_MANAGE_ARGS=\n++ [[ -n '' ]]\n++ [[ -n '' ]]\n++ [[ -n 0 ]]\n++ sudo -H -u keystone keystone-manage db_sync\n2026-01-03 00:58:38.170 1081 DEBUG oslo_db.sqlalchemy.engines [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py:397\n2026-01-03 00:58:38.175 1081 CRITICAL keystone [-] Unhandled error: sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")\n(Background on this error at: https://sqlalche.me/e/20/e3q8)\n2026-01-03 00:58:38.175 1081 ERROR keystone Traceback (most recent call last):\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__\n2026-01-03 00:58:38.175 1081 ERROR keystone self._dbapi_connection = engine.raw_connection()\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3298, in 
raw_connection\n2026-01-03 00:58:38.175 1081 ERROR keystone return self.pool.connect()\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect\n2026-01-03 00:58:38.175 1081 ERROR keystone return _ConnectionFairy._checkout(self)\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout\n2026-01-03 00:58:38.175 1081 ERROR keystone fairy = _ConnectionRecord.checkout(pool)\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout\n2026-01-03 00:58:38.175 1081 ERROR keystone rec = pool._do_get()\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get\n2026-01-03 00:58:38.175 1081 ERROR keystone with util.safe_reraise():\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__\n2026-01-03 00:58:38.175 1081 ERROR keystone raise exc_value.with_traceback(exc_tb)\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get\n2026-01-03 00:58:38.175 1081 ERROR keystone return self._create_connection()\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection\n2026-01-03 00:58:38.175 
1081 ERROR keystone return _ConnectionRecord(self)\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__\n2026-01-03 00:58:38.175 1081 ERROR keystone self.__connect()\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect\n2026-01-03 00:58:38.175 1081 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run\n2026-01-03 00:58:38.175 1081 ERROR keystone self(*args, **kw)\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__\n2026-01-03 00:58:38.175 1081 ERROR keystone fn(*args, **kw)\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1916, in go\n2026-01-03 00:58:38.175 1081 ERROR keystone return once_fn(*arg, **kw)\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 752, in first_connect\n2026-01-03 00:58:38.175 1081 ERROR keystone dialect.initialize(c)\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2898, in initialize\n2026-01-03 00:58:38.175 1081 ERROR keystone default.DefaultDialect.initialize(self, connection)\n2026-01-03 00:58:38.175 1081 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 533, in initialize\n2026-01-03 00:58:38.175 1081 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 584, in get_default_isolation_level\n2026-01-03 00:58:38.175 1081 ERROR keystone return self.get_isolation_level(dbapi_conn)\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2603, in get_isolation_level\n2026-01-03 00:58:38.175 1081 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute\n2026-01-03 00:58:38.175 1081 ERROR keystone result = self._query(query)\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query\n2026-01-03 00:58:38.175 1081 ERROR keystone conn.query(q)\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query\n2026-01-03 00:58:38.175 1081 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result\n2026-01-03 00:58:38.175 1081 ERROR keystone result.read()\n2026-01-03 00:58:38.175 1081 ERROR 
keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read\n2026-01-03 00:58:38.175 1081 ERROR keystone first_packet = self.connection._read_packet()\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet\n2026-01-03 00:58:38.175 1081 ERROR keystone packet.raise_for_error()\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error\n2026-01-03 00:58:38.175 1081 ERROR keystone err.raise_mysql_exception(self._data)\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception\n2026-01-03 00:58:38.175 1081 ERROR keystone raise errorclass(errno, errval)\n2026-01-03 00:58:38.175 1081 ERROR keystone pymysql.err.OperationalError: (1193, \"Unknown system variable 'transaction_isolation'\")\n2026-01-03 00:58:38.175 1081 ERROR keystone \n2026-01-03 00:58:38.175 1081 ERROR keystone The above exception was the direct cause of the following exception:\n2026-01-03 00:58:38.175 1081 ERROR keystone \n2026-01-03 00:58:38.175 1081 ERROR keystone Traceback (most recent call last):\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/bin/keystone-manage\", line 7, in \n2026-01-03 00:58:38.175 1081 ERROR keystone sys.exit(main())\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/manage.py\", line 36, in main\n2026-01-03 00:58:38.175 1081 ERROR keystone cli.main(argv=sys.argv, developer_config_file=developer_config)\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 1727, in 
main\n2026-01-03 00:58:38.175 1081 ERROR keystone CONF.command.cmd_class.main()\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 492, in main\n2026-01-03 00:58:38.175 1081 ERROR keystone upgrades.offline_sync_database_to_version(CONF.command.version)\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 321, in offline_sync_database_to_version\n2026-01-03 00:58:38.175 1081 ERROR keystone _db_sync(engine=engine)\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 210, in _db_sync\n2026-01-03 00:58:38.175 1081 ERROR keystone with sql.session_for_write() as session:\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__\n2026-01-03 00:58:38.175 1081 ERROR keystone return next(self.gen)\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 1199, in _transaction_scope\n2026-01-03 00:58:38.175 1081 ERROR keystone with current._produce_block(\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__\n2026-01-03 00:58:38.175 1081 ERROR keystone return next(self.gen)\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 841, in _session\n2026-01-03 00:58:38.175 1081 ERROR keystone self.session = self.factory._create_session(\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 459, in _create_session\n2026-01-03 00:58:38.175 1081 ERROR keystone self._start()\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 530, in _start\n2026-01-03 00:58:38.175 1081 ERROR keystone self._setup_for_connection(\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 647, in _setup_for_connection\n2026-01-03 00:58:38.175 1081 ERROR keystone engine = engines.create_engine(\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/debtcollector/renames.py\", line 41, in decorator\n2026-01-03 00:58:38.175 1081 ERROR keystone return wrapped(*args, **kwargs)\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 271, in create_engine\n2026-01-03 00:58:38.175 1081 ERROR keystone _test_connection(engine_event_target, max_retries, retry_interval)\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 169, in _test_connection\n2026-01-03 00:58:38.175 1081 ERROR keystone conn = engine.connect()\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3274, in connect\n2026-01-03 00:58:38.175 1081 ERROR keystone return self._connection_cls(self)\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 148, in __init__\n2026-01-03 00:58:38.175 1081 ERROR keystone Connection._handle_dbapi_exception_noconnection(\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 2436, in _handle_dbapi_exception_noconnection\n2026-01-03 00:58:38.175 1081 ERROR keystone raise newraise.with_traceback(exc_info[2]) from e\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__\n2026-01-03 00:58:38.175 1081 ERROR keystone self._dbapi_connection = engine.raw_connection()\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3298, in raw_connection\n2026-01-03 00:58:38.175 1081 ERROR keystone return self.pool.connect()\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect\n2026-01-03 00:58:38.175 1081 ERROR keystone return _ConnectionFairy._checkout(self)\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout\n2026-01-03 00:58:38.175 1081 ERROR keystone fairy = _ConnectionRecord.checkout(pool)\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout\n2026-01-03 00:58:38.175 1081 ERROR keystone rec = pool._do_get()\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-03 
00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get\n2026-01-03 00:58:38.175 1081 ERROR keystone with util.safe_reraise():\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__\n2026-01-03 00:58:38.175 1081 ERROR keystone raise exc_value.with_traceback(exc_tb)\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get\n2026-01-03 00:58:38.175 1081 ERROR keystone return self._create_connection()\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection\n2026-01-03 00:58:38.175 1081 ERROR keystone return _ConnectionRecord(self)\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__\n2026-01-03 00:58:38.175 1081 ERROR keystone self.__connect()\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect\n2026-01-03 00:58:38.175 1081 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run\n2026-01-03 00:58:38.175 1081 ERROR keystone self(*args, **kw)\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in 
__call__\n2026-01-03 00:58:38.175 1081 ERROR keystone fn(*args, **kw)\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1916, in go\n2026-01-03 00:58:38.175 1081 ERROR keystone return once_fn(*arg, **kw)\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 752, in first_connect\n2026-01-03 00:58:38.175 1081 ERROR keystone dialect.initialize(c)\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2898, in initialize\n2026-01-03 00:58:38.175 1081 ERROR keystone default.DefaultDialect.initialize(self, connection)\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 533, in initialize\n2026-01-03 00:58:38.175 1081 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 584, in get_default_isolation_level\n2026-01-03 00:58:38.175 1081 ERROR keystone return self.get_isolation_level(dbapi_conn)\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2603, in get_isolation_level\n2026-01-03 00:58:38.175 1081 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute\n2026-01-03 00:58:38.175 1081 ERROR keystone result 
= self._query(query)\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query\n2026-01-03 00:58:38.175 1081 ERROR keystone conn.query(q)\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query\n2026-01-03 00:58:38.175 1081 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result\n2026-01-03 00:58:38.175 1081 ERROR keystone result.read()\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read\n2026-01-03 00:58:38.175 1081 ERROR keystone first_packet = self.connection._read_packet()\n2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet\n2026-01-03 00:58:38.175 1081 ERROR keystone packet.raise_for_error()\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error\n2026-01-03 00:58:38.175 1081 ERROR keystone err.raise_mysql_exception(self._data)\n2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception\n2026-01-03 00:58:38.175 1081 ERROR keystone raise errorclass(errno, errval)\n2026-01-03 00:58:38.175 1081 ERROR keystone sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system 
variable 'transaction_isolation'\")\n2026-01-03 00:58:38.175 1081 ERROR keystone (Background on this error at: https://sqlalche.me/e/20/e3q8)\n2026-01-03 00:58:38.175 1081 ERROR keystone \n", "stderr_lines": ["+ sudo -E kolla_set_configs", "2026-01-03 00:58:28.574 INFO Loading config file at /var/lib/kolla/config_files/config.json", "2026-01-03 00:58:28.575 INFO Validating config file", "2026-01-03 00:58:28.575 INFO Kolla config strategy set to: COPY_ALWAYS", "2026-01-03 00:58:28.580 INFO Copying service configuration files", "2026-01-03 00:58:28.581 INFO Copying /var/lib/kolla/config_files/keystone-startup.sh to /usr/bin/keystone-startup.sh", "2026-01-03 00:58:28.589 INFO Setting permission for /usr/bin/keystone-startup.sh", "2026-01-03 00:58:28.590 INFO Copying /var/lib/kolla/config_files/keystone.conf to /etc/keystone/keystone.conf", "2026-01-03 00:58:28.590 INFO Setting permission for /etc/keystone/keystone.conf", "2026-01-03 00:58:28.590 INFO Copying /var/lib/kolla/config_files/wsgi-keystone.conf to /etc/apache2/conf-enabled/wsgi-keystone.conf", "2026-01-03 00:58:28.599 INFO Setting permission for /etc/apache2/conf-enabled/wsgi-keystone.conf", "2026-01-03 00:58:28.599 INFO Creating directory /var/lib/kolla/share/ca-certificates", "2026-01-03 00:58:28.600 INFO Setting permission for /var/lib/kolla/share/ca-certificates", "2026-01-03 00:58:28.600 INFO Copying /var/lib/kolla/config_files/ca-certificates/testbed.crt to /var/lib/kolla/share/ca-certificates/testbed.crt", "2026-01-03 00:58:28.600 INFO Setting permission for /var/lib/kolla/share/ca-certificates/testbed.crt", "2026-01-03 00:58:28.600 INFO Writing out command to execute", "2026-01-03 00:58:28.601 INFO Setting permission for /var/log/kolla", "2026-01-03 00:58:28.601 INFO Setting permission for /etc/keystone/fernet-keys", "++ cat /run_command", "+ CMD=/usr/bin/keystone-startup.sh", "+ ARGS=", "+ sudo kolla_copy_cacerts", "rehash: warning: skipping ca-certificates.crt,it does not contain exactly one 
certificate or CRL", "+ sudo kolla_install_projects", "+ [[ ! -n '' ]]", "+ . kolla_extend_start", "++ KEYSTONE_LOG_DIR=/var/log/kolla/keystone", "++ [[ ! -d /var/log/kolla/keystone ]]", "++ mkdir -p /var/log/kolla/keystone", "+++ stat -c %U:%G /var/log/kolla/keystone", "++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\o\\l\\l\\a ]]", "++ chown keystone:kolla /var/log/kolla/keystone", "++ '[' '!' -f /var/log/kolla/keystone/keystone.log ']'", "++ touch /var/log/kolla/keystone/keystone.log", "+++ stat -c %U:%G /var/log/kolla/keystone/keystone.log", "++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\e\\y\\s\\t\\o\\n\\e ]]", "++ chown keystone:keystone /var/log/kolla/keystone/keystone.log", "+++ stat -c %a /var/log/kolla/keystone", "++ [[ 2755 != \\7\\5\\5 ]]", "++ chmod 755 /var/log/kolla/keystone", "++ EXTRA_KEYSTONE_MANAGE_ARGS=", "++ [[ -n '' ]]", "++ [[ -n '' ]]", "++ [[ -n 0 ]]", "++ sudo -H -u keystone keystone-manage db_sync", "2026-01-03 00:58:38.170 1081 DEBUG oslo_db.sqlalchemy.engines [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py:397", "2026-01-03 00:58:38.175 1081 CRITICAL keystone [-] Unhandled error: sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")", "(Background on this error at: https://sqlalche.me/e/20/e3q8)", "2026-01-03 00:58:38.175 1081 ERROR keystone Traceback (most recent call last):", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__", "2026-01-03 00:58:38.175 1081 ERROR keystone self._dbapi_connection = engine.raw_connection()", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR 
keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3298, in raw_connection", "2026-01-03 00:58:38.175 1081 ERROR keystone return self.pool.connect()", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect", "2026-01-03 00:58:38.175 1081 ERROR keystone return _ConnectionFairy._checkout(self)", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout", "2026-01-03 00:58:38.175 1081 ERROR keystone fairy = _ConnectionRecord.checkout(pool)", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout", "2026-01-03 00:58:38.175 1081 ERROR keystone rec = pool._do_get()", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get", "2026-01-03 00:58:38.175 1081 ERROR keystone with util.safe_reraise():", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__", "2026-01-03 00:58:38.175 1081 ERROR keystone raise exc_value.with_traceback(exc_tb)", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get", "2026-01-03 00:58:38.175 1081 ERROR keystone return self._create_connection()", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR 
keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection", "2026-01-03 00:58:38.175 1081 ERROR keystone return _ConnectionRecord(self)", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__", "2026-01-03 00:58:38.175 1081 ERROR keystone self.__connect()", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect", "2026-01-03 00:58:38.175 1081 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run", "2026-01-03 00:58:38.175 1081 ERROR keystone self(*args, **kw)", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__", "2026-01-03 00:58:38.175 1081 ERROR keystone fn(*args, **kw)", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1916, in go", "2026-01-03 00:58:38.175 1081 ERROR keystone return once_fn(*arg, **kw)", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 752, in first_connect", "2026-01-03 00:58:38.175 1081 ERROR keystone dialect.initialize(c)", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2898, in initialize", 
"2026-01-03 00:58:38.175 1081 ERROR keystone default.DefaultDialect.initialize(self, connection)", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 533, in initialize", "2026-01-03 00:58:38.175 1081 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 584, in get_default_isolation_level", "2026-01-03 00:58:38.175 1081 ERROR keystone return self.get_isolation_level(dbapi_conn)", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2603, in get_isolation_level", "2026-01-03 00:58:38.175 1081 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute", "2026-01-03 00:58:38.175 1081 ERROR keystone result = self._query(query)", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query", "2026-01-03 00:58:38.175 1081 ERROR keystone conn.query(q)", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query", "2026-01-03 00:58:38.175 1081 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result", "2026-01-03 00:58:38.175 1081 ERROR keystone result.read()", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read", "2026-01-03 00:58:38.175 1081 ERROR keystone first_packet = self.connection._read_packet()", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet", "2026-01-03 00:58:38.175 1081 ERROR keystone packet.raise_for_error()", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error", "2026-01-03 00:58:38.175 1081 ERROR keystone err.raise_mysql_exception(self._data)", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception", "2026-01-03 00:58:38.175 1081 ERROR keystone raise errorclass(errno, errval)", "2026-01-03 00:58:38.175 1081 ERROR keystone pymysql.err.OperationalError: (1193, \"Unknown system variable 'transaction_isolation'\")", "2026-01-03 00:58:38.175 1081 ERROR keystone ", "2026-01-03 00:58:38.175 1081 ERROR keystone The above exception was the direct cause of the following exception:", "2026-01-03 00:58:38.175 1081 ERROR keystone ", "2026-01-03 00:58:38.175 1081 ERROR keystone Traceback (most recent call last):", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/bin/keystone-manage\", line 7, in ", "2026-01-03 00:58:38.175 1081 ERROR keystone sys.exit(main())", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/manage.py\", line 36, in main", 
"2026-01-03 00:58:38.175 1081 ERROR keystone cli.main(argv=sys.argv, developer_config_file=developer_config)", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 1727, in main", "2026-01-03 00:58:38.175 1081 ERROR keystone CONF.command.cmd_class.main()", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 492, in main", "2026-01-03 00:58:38.175 1081 ERROR keystone upgrades.offline_sync_database_to_version(CONF.command.version)", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 321, in offline_sync_database_to_version", "2026-01-03 00:58:38.175 1081 ERROR keystone _db_sync(engine=engine)", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 210, in _db_sync", "2026-01-03 00:58:38.175 1081 ERROR keystone with sql.session_for_write() as session:", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__", "2026-01-03 00:58:38.175 1081 ERROR keystone return next(self.gen)", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 1199, in _transaction_scope", "2026-01-03 00:58:38.175 1081 ERROR keystone with current._produce_block(", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__", "2026-01-03 00:58:38.175 1081 ERROR keystone return next(self.gen)", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 841, in _session", 
"2026-01-03 00:58:38.175 1081 ERROR keystone self.session = self.factory._create_session(", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 459, in _create_session", "2026-01-03 00:58:38.175 1081 ERROR keystone self._start()", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 530, in _start", "2026-01-03 00:58:38.175 1081 ERROR keystone self._setup_for_connection(", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 647, in _setup_for_connection", "2026-01-03 00:58:38.175 1081 ERROR keystone engine = engines.create_engine(", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/debtcollector/renames.py\", line 41, in decorator", "2026-01-03 00:58:38.175 1081 ERROR keystone return wrapped(*args, **kwargs)", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 271, in create_engine", "2026-01-03 00:58:38.175 1081 ERROR keystone _test_connection(engine_event_target, max_retries, retry_interval)", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 169, in _test_connection", "2026-01-03 00:58:38.175 1081 ERROR keystone conn = engine.connect()", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3274, in connect", 
"2026-01-03 00:58:38.175 1081 ERROR keystone return self._connection_cls(self)", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 148, in __init__", "2026-01-03 00:58:38.175 1081 ERROR keystone Connection._handle_dbapi_exception_noconnection(", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 2436, in _handle_dbapi_exception_noconnection", "2026-01-03 00:58:38.175 1081 ERROR keystone raise newraise.with_traceback(exc_info[2]) from e", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__", "2026-01-03 00:58:38.175 1081 ERROR keystone self._dbapi_connection = engine.raw_connection()", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3298, in raw_connection", "2026-01-03 00:58:38.175 1081 ERROR keystone return self.pool.connect()", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect", "2026-01-03 00:58:38.175 1081 ERROR keystone return _ConnectionFairy._checkout(self)", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout", "2026-01-03 00:58:38.175 1081 ERROR keystone fairy = _ConnectionRecord.checkout(pool)", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout", "2026-01-03 00:58:38.175 1081 ERROR keystone rec = pool._do_get()", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get", "2026-01-03 00:58:38.175 1081 ERROR keystone with util.safe_reraise():", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__", "2026-01-03 00:58:38.175 1081 ERROR keystone raise exc_value.with_traceback(exc_tb)", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get", "2026-01-03 00:58:38.175 1081 ERROR keystone return self._create_connection()", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection", "2026-01-03 00:58:38.175 1081 ERROR keystone return _ConnectionRecord(self)", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__", "2026-01-03 00:58:38.175 1081 ERROR keystone self.__connect()", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect", "2026-01-03 00:58:38.175 1081 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run", "2026-01-03 00:58:38.175 1081 ERROR keystone self(*args, **kw)", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__", "2026-01-03 00:58:38.175 1081 ERROR keystone fn(*args, **kw)", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1916, in go", "2026-01-03 00:58:38.175 1081 ERROR keystone return once_fn(*arg, **kw)", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 752, in first_connect", "2026-01-03 00:58:38.175 1081 ERROR keystone dialect.initialize(c)", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2898, in initialize", "2026-01-03 00:58:38.175 1081 ERROR keystone default.DefaultDialect.initialize(self, connection)", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 533, in initialize", "2026-01-03 00:58:38.175 1081 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 584, in get_default_isolation_level", "2026-01-03 00:58:38.175 1081 ERROR keystone return self.get_isolation_level(dbapi_conn)", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2603, in get_isolation_level", "2026-01-03 00:58:38.175 1081 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute", "2026-01-03 00:58:38.175 1081 ERROR keystone result = self._query(query)", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query", "2026-01-03 00:58:38.175 1081 ERROR keystone conn.query(q)", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query", "2026-01-03 00:58:38.175 1081 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result", "2026-01-03 00:58:38.175 1081 ERROR keystone result.read()", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read", "2026-01-03 00:58:38.175 1081 ERROR keystone first_packet = self.connection._read_packet()", "2026-01-03 00:58:38.175 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet", "2026-01-03 00:58:38.175 1081 ERROR keystone packet.raise_for_error()", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error", 
"2026-01-03 00:58:38.175 1081 ERROR keystone err.raise_mysql_exception(self._data)", "2026-01-03 00:58:38.175 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception", "2026-01-03 00:58:38.175 1081 ERROR keystone raise errorclass(errno, errval)", "2026-01-03 00:58:38.175 1081 ERROR keystone sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")", "2026-01-03 00:58:38.175 1081 ERROR keystone (Background on this error at: https://sqlalche.me/e/20/e3q8)", "2026-01-03 00:58:38.175 1081 ERROR keystone "], "stdout": "Updating certificates in /etc/ssl/certs...\n1 added, 0 removed; done.\nRunning hooks in /etc/ca-certificates/update.d...\ndone.\n", "stdout_lines": ["Updating certificates in /etc/ssl/certs...", "1 added, 0 removed; done.", "Running hooks in /etc/ca-certificates/update.d...", "done."]} 2026-01-03 00:58:40.867655 | orchestrator | 2026-01-03 00:58:40.867660 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:58:40.867665 | orchestrator | testbed-node-0 : ok=22  changed=12  unreachable=0 failed=1  skipped=13  rescued=0 ignored=0 2026-01-03 00:58:40.867670 | orchestrator | testbed-node-1 : ok=18  changed=10  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-03 00:58:40.867675 | orchestrator | testbed-node-2 : ok=18  changed=10  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-03 00:58:40.867679 | orchestrator | 2026-01-03 00:58:40.867683 | orchestrator | 2026-01-03 00:58:40.867686 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:58:40.867690 | orchestrator | Saturday 03 January 2026 00:58:39 +0000 (0:00:12.394) 0:01:02.683 ****** 2026-01-03 00:58:40.867694 | orchestrator | =============================================================================== 2026-01-03 
00:58:40.867698 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 12.39s 2026-01-03 00:58:40.867702 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.17s 2026-01-03 00:58:40.867705 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.39s 2026-01-03 00:58:40.867785 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.38s 2026-01-03 00:58:40.867791 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.10s 2026-01-03 00:58:40.867795 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.85s 2026-01-03 00:58:40.867798 | orchestrator | service-check-containers : keystone | Check containers ------------------ 2.57s 2026-01-03 00:58:40.867802 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.57s 2026-01-03 00:58:40.867806 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.52s 2026-01-03 00:58:40.867810 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.91s 2026-01-03 00:58:40.867813 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.70s 2026-01-03 00:58:40.867817 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.32s 2026-01-03 00:58:40.867821 | orchestrator | keystone : Generate the required cron jobs for the node ----------------- 1.30s 2026-01-03 00:58:40.867825 | orchestrator | keystone : Checking for any running keystone_fernet containers ---------- 0.96s 2026-01-03 00:58:40.867828 | orchestrator | keystone : Checking whether keystone-paste.ini file exists -------------- 0.88s 2026-01-03 00:58:40.867832 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.87s 2026-01-03 00:58:40.867836 
| orchestrator | keystone : Check if Keystone domain-specific config is supplied --------- 0.80s 2026-01-03 00:58:40.867844 | orchestrator | keystone : Copying over keystone-paste.ini ------------------------------ 0.78s 2026-01-03 00:58:40.867847 | orchestrator | service-cert-copy : keystone | Copying over backend internal TLS certificate --- 0.78s 2026-01-03 00:58:40.867851 | orchestrator | service-cert-copy : keystone | Copying over backend internal TLS key ---- 0.73s 2026-01-03 00:58:40.867855 | orchestrator | 2026-01-03 00:58:40 | INFO  | Task 59b8fe13-768a-4cc4-b35e-454dc69ec66c is in state STARTED 2026-01-03 00:58:40.867862 | orchestrator | 2026-01-03 00:58:40 | INFO  | Task 41c057ee-3c03-40bc-9d3e-5ff87ebdd950 is in state STARTED 2026-01-03 00:58:40.869178 | orchestrator | 2026-01-03 00:58:40 | INFO  | Task 28f3b550-d646-4431-9ba5-4c2ae0a31358 is in state STARTED 2026-01-03 00:58:40.869336 | orchestrator | 2026-01-03 00:58:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:58:43.917475 | orchestrator | 2026-01-03 00:58:43 | INFO  | Task dd16cd92-4ba7-479b-9b44-63681e0e3f35 is in state STARTED 2026-01-03 00:58:43.918878 | orchestrator | 2026-01-03 00:58:43 | INFO  | Task 99d18e6a-b552-46e6-90bf-e2a9eb821ee1 is in state STARTED 2026-01-03 00:58:43.924618 | orchestrator | 2026-01-03 00:58:43 | INFO  | Task 59b8fe13-768a-4cc4-b35e-454dc69ec66c is in state STARTED 2026-01-03 00:58:43.924676 | orchestrator | 2026-01-03 00:58:43 | INFO  | Task 41c057ee-3c03-40bc-9d3e-5ff87ebdd950 is in state STARTED 2026-01-03 00:58:43.925305 | orchestrator | 2026-01-03 00:58:43 | INFO  | Task 28f3b550-d646-4431-9ba5-4c2ae0a31358 is in state STARTED 2026-01-03 00:58:43.925324 | orchestrator | 2026-01-03 00:58:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:58:46.961107 | orchestrator | 2026-01-03 00:58:46 | INFO  | Task dd16cd92-4ba7-479b-9b44-63681e0e3f35 is in state STARTED 2026-01-03 00:58:46.962769 | orchestrator | 2026-01-03 00:58:46 | INFO  | 
Task 99d18e6a-b552-46e6-90bf-e2a9eb821ee1 is in state STARTED 2026-01-03 00:58:46.963804 | orchestrator | 2026-01-03 00:58:46 | INFO  | Task 59b8fe13-768a-4cc4-b35e-454dc69ec66c is in state STARTED 2026-01-03 00:58:46.965149 | orchestrator | 2026-01-03 00:58:46 | INFO  | Task 41c057ee-3c03-40bc-9d3e-5ff87ebdd950 is in state STARTED 2026-01-03 00:58:46.966784 | orchestrator | 2026-01-03 00:58:46 | INFO  | Task 28f3b550-d646-4431-9ba5-4c2ae0a31358 is in state STARTED 2026-01-03 00:58:46.967013 | orchestrator | 2026-01-03 00:58:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:58:50.008329 | orchestrator | 2026-01-03 00:58:50 | INFO  | Task dd16cd92-4ba7-479b-9b44-63681e0e3f35 is in state STARTED 2026-01-03 00:58:50.010007 | orchestrator | 2026-01-03 00:58:50 | INFO  | Task 99d18e6a-b552-46e6-90bf-e2a9eb821ee1 is in state STARTED 2026-01-03 00:58:50.011508 | orchestrator | 2026-01-03 00:58:50 | INFO  | Task 59b8fe13-768a-4cc4-b35e-454dc69ec66c is in state STARTED 2026-01-03 00:58:50.012841 | orchestrator | 2026-01-03 00:58:50 | INFO  | Task 41c057ee-3c03-40bc-9d3e-5ff87ebdd950 is in state STARTED 2026-01-03 00:58:50.014353 | orchestrator | 2026-01-03 00:58:50 | INFO  | Task 28f3b550-d646-4431-9ba5-4c2ae0a31358 is in state STARTED 2026-01-03 00:58:50.014404 | orchestrator | 2026-01-03 00:58:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:58:53.062649 | orchestrator | 2026-01-03 00:58:53 | INFO  | Task dd16cd92-4ba7-479b-9b44-63681e0e3f35 is in state STARTED 2026-01-03 00:58:53.064610 | orchestrator | 2026-01-03 00:58:53 | INFO  | Task 99d18e6a-b552-46e6-90bf-e2a9eb821ee1 is in state STARTED 2026-01-03 00:58:53.066088 | orchestrator | 2026-01-03 00:58:53 | INFO  | Task 59b8fe13-768a-4cc4-b35e-454dc69ec66c is in state STARTED 2026-01-03 00:58:53.067504 | orchestrator | 2026-01-03 00:58:53 | INFO  | Task 41c057ee-3c03-40bc-9d3e-5ff87ebdd950 is in state STARTED 2026-01-03 00:58:53.069491 | orchestrator | 2026-01-03 00:58:53 | INFO  | Task 
28f3b550-d646-4431-9ba5-4c2ae0a31358 is in state STARTED 2026-01-03 00:58:53.069807 | orchestrator | 2026-01-03 00:58:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:58:56.120878 | orchestrator | 2026-01-03 00:58:56 | INFO  | Task dd16cd92-4ba7-479b-9b44-63681e0e3f35 is in state STARTED 2026-01-03 00:58:56.121876 | orchestrator | 2026-01-03 00:58:56 | INFO  | Task 99d18e6a-b552-46e6-90bf-e2a9eb821ee1 is in state SUCCESS 2026-01-03 00:58:56.125191 | orchestrator | 2026-01-03 00:58:56.125261 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-03 00:58:56.125322 | orchestrator | 2.16.14 2026-01-03 00:58:56.125333 | orchestrator | 2026-01-03 00:58:56.125340 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-01-03 00:58:56.125348 | orchestrator | 2026-01-03 00:58:56.125355 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-03 00:58:56.125362 | orchestrator | Saturday 03 January 2026 00:56:49 +0000 (0:00:00.570) 0:00:00.570 ****** 2026-01-03 00:58:56.125369 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:58:56.125377 | orchestrator | 2026-01-03 00:58:56.125384 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-03 00:58:56.125391 | orchestrator | Saturday 03 January 2026 00:56:50 +0000 (0:00:00.604) 0:00:01.174 ****** 2026-01-03 00:58:56.125398 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:58:56.125405 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:58:56.125788 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:58:56.125797 | orchestrator | 2026-01-03 00:58:56.125804 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-03 00:58:56.125811 | orchestrator | Saturday 03 January 2026 00:56:50 +0000 
(0:00:00.641) 0:00:01.816 ****** 2026-01-03 00:58:56.125817 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:58:56.125823 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:58:56.125830 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:58:56.125836 | orchestrator | 2026-01-03 00:58:56.125842 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-03 00:58:56.125848 | orchestrator | Saturday 03 January 2026 00:56:51 +0000 (0:00:00.263) 0:00:02.080 ****** 2026-01-03 00:58:56.125855 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:58:56.125861 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:58:56.125881 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:58:56.125887 | orchestrator | 2026-01-03 00:58:56.125893 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-03 00:58:56.125899 | orchestrator | Saturday 03 January 2026 00:56:51 +0000 (0:00:00.734) 0:00:02.814 ****** 2026-01-03 00:58:56.125905 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:58:56.125911 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:58:56.125918 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:58:56.125923 | orchestrator | 2026-01-03 00:58:56.125929 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-03 00:58:56.125935 | orchestrator | Saturday 03 January 2026 00:56:52 +0000 (0:00:00.286) 0:00:03.101 ****** 2026-01-03 00:58:56.125942 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:58:56.125948 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:58:56.125955 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:58:56.125962 | orchestrator | 2026-01-03 00:58:56.125968 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-03 00:58:56.125975 | orchestrator | Saturday 03 January 2026 00:56:52 +0000 (0:00:00.292) 0:00:03.393 ****** 2026-01-03 00:58:56.125983 | orchestrator 
| ok: [testbed-node-3] 2026-01-03 00:58:56.125990 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:58:56.125997 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:58:56.126003 | orchestrator | 2026-01-03 00:58:56.126079 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-03 00:58:56.126158 | orchestrator | Saturday 03 January 2026 00:56:52 +0000 (0:00:00.291) 0:00:03.685 ****** 2026-01-03 00:58:56.126169 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:58:56.126177 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:58:56.126184 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:58:56.126191 | orchestrator | 2026-01-03 00:58:56.126198 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-03 00:58:56.126205 | orchestrator | Saturday 03 January 2026 00:56:53 +0000 (0:00:00.466) 0:00:04.152 ****** 2026-01-03 00:58:56.126211 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:58:56.126218 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:58:56.126225 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:58:56.126232 | orchestrator | 2026-01-03 00:58:56.126238 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-03 00:58:56.126245 | orchestrator | Saturday 03 January 2026 00:56:53 +0000 (0:00:00.295) 0:00:04.447 ****** 2026-01-03 00:58:56.126251 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-03 00:58:56.126257 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-03 00:58:56.126263 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-03 00:58:56.126269 | orchestrator | 2026-01-03 00:58:56.126297 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-03 00:58:56.126303 | 
orchestrator | Saturday 03 January 2026 00:56:54 +0000 (0:00:00.662) 0:00:05.110 ****** 2026-01-03 00:58:56.126509 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:58:56.126523 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:58:56.126530 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:58:56.126537 | orchestrator | 2026-01-03 00:58:56.126545 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-03 00:58:56.126553 | orchestrator | Saturday 03 January 2026 00:56:54 +0000 (0:00:00.414) 0:00:05.524 ****** 2026-01-03 00:58:56.126559 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-03 00:58:56.126567 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-03 00:58:56.126573 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-03 00:58:56.126579 | orchestrator | 2026-01-03 00:58:56.126586 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-03 00:58:56.126593 | orchestrator | Saturday 03 January 2026 00:56:56 +0000 (0:00:02.031) 0:00:07.556 ****** 2026-01-03 00:58:56.126600 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-03 00:58:56.126607 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-03 00:58:56.126613 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-03 00:58:56.126620 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:58:56.126626 | orchestrator | 2026-01-03 00:58:56.126672 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-03 00:58:56.126680 | orchestrator | Saturday 03 January 2026 00:56:57 +0000 (0:00:00.678) 0:00:08.235 ****** 2026-01-03 00:58:56.126688 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.126697 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.126703 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.126721 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:58:56.126727 | orchestrator | 2026-01-03 00:58:56.126734 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-03 00:58:56.126740 | orchestrator | Saturday 03 January 2026 00:56:58 +0000 (0:00:00.799) 0:00:09.034 ****** 2026-01-03 00:58:56.126757 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.126767 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-03 
00:58:56.126773 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.126779 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:58:56.126785 | orchestrator | 2026-01-03 00:58:56.126791 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-03 00:58:56.126797 | orchestrator | Saturday 03 January 2026 00:56:58 +0000 (0:00:00.333) 0:00:09.367 ****** 2026-01-03 00:58:56.126805 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '84b6e09911a3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-03 00:56:55.227941', 'end': '2026-01-03 00:56:55.258668', 'delta': '0:00:00.030727', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['84b6e09911a3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-01-03 00:58:56.126814 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '14db60bd5210', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-03 00:56:55.929585', 'end': '2026-01-03 00:56:55.959355', 'delta': '0:00:00.029770', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter 
name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['14db60bd5210'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-01-03 00:58:56.126844 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '1546c38ab47b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-03 00:56:56.452521', 'end': '2026-01-03 00:56:56.480667', 'delta': '0:00:00.028146', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1546c38ab47b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-01-03 00:58:56.126857 | orchestrator | 2026-01-03 00:58:56.126863 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-03 00:58:56.126870 | orchestrator | Saturday 03 January 2026 00:56:58 +0000 (0:00:00.194) 0:00:09.562 ****** 2026-01-03 00:58:56.126876 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:58:56.126882 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:58:56.126888 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:58:56.126893 | orchestrator | 2026-01-03 00:58:56.126900 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-03 00:58:56.126905 | orchestrator | Saturday 03 January 2026 00:56:59 +0000 (0:00:00.444) 0:00:10.006 ****** 2026-01-03 00:58:56.126912 | orchestrator | ok: 
[testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-01-03 00:58:56.126918 | orchestrator | 2026-01-03 00:58:56.126928 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-03 00:58:56.126934 | orchestrator | Saturday 03 January 2026 00:57:00 +0000 (0:00:01.647) 0:00:11.654 ****** 2026-01-03 00:58:56.126940 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:58:56.126946 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:58:56.126952 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:58:56.126958 | orchestrator | 2026-01-03 00:58:56.126964 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-03 00:58:56.126970 | orchestrator | Saturday 03 January 2026 00:57:00 +0000 (0:00:00.274) 0:00:11.928 ****** 2026-01-03 00:58:56.126976 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:58:56.126983 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:58:56.126989 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:58:56.126996 | orchestrator | 2026-01-03 00:58:56.127003 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-03 00:58:56.127010 | orchestrator | Saturday 03 January 2026 00:57:01 +0000 (0:00:00.400) 0:00:12.329 ****** 2026-01-03 00:58:56.127016 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:58:56.127022 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:58:56.127028 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:58:56.127034 | orchestrator | 2026-01-03 00:58:56.127040 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-03 00:58:56.127047 | orchestrator | Saturday 03 January 2026 00:57:01 +0000 (0:00:00.489) 0:00:12.818 ****** 2026-01-03 00:58:56.127053 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:58:56.127060 | orchestrator | 2026-01-03 00:58:56.127066 | orchestrator | TASK 
[ceph-facts : Generate cluster fsid] ************************************** 2026-01-03 00:58:56.127073 | orchestrator | Saturday 03 January 2026 00:57:01 +0000 (0:00:00.117) 0:00:12.936 ****** 2026-01-03 00:58:56.127080 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:58:56.127087 | orchestrator | 2026-01-03 00:58:56.127093 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-03 00:58:56.127100 | orchestrator | Saturday 03 January 2026 00:57:02 +0000 (0:00:00.239) 0:00:13.175 ****** 2026-01-03 00:58:56.127106 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:58:56.127113 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:58:56.127120 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:58:56.127126 | orchestrator | 2026-01-03 00:58:56.127133 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-03 00:58:56.127141 | orchestrator | Saturday 03 January 2026 00:57:02 +0000 (0:00:00.276) 0:00:13.452 ****** 2026-01-03 00:58:56.127149 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:58:56.127156 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:58:56.127164 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:58:56.127171 | orchestrator | 2026-01-03 00:58:56.127178 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-03 00:58:56.127186 | orchestrator | Saturday 03 January 2026 00:57:02 +0000 (0:00:00.292) 0:00:13.745 ****** 2026-01-03 00:58:56.127200 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:58:56.127208 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:58:56.127215 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:58:56.127222 | orchestrator | 2026-01-03 00:58:56.127230 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-01-03 00:58:56.127238 | orchestrator | Saturday 03 January 2026 
00:57:03 +0000 (0:00:00.473) 0:00:14.218 ****** 2026-01-03 00:58:56.127246 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:58:56.127253 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:58:56.127261 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:58:56.127268 | orchestrator | 2026-01-03 00:58:56.127337 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-03 00:58:56.127346 | orchestrator | Saturday 03 January 2026 00:57:03 +0000 (0:00:00.325) 0:00:14.543 ****** 2026-01-03 00:58:56.127353 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:58:56.127361 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:58:56.127369 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:58:56.127377 | orchestrator | 2026-01-03 00:58:56.127384 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-03 00:58:56.127392 | orchestrator | Saturday 03 January 2026 00:57:03 +0000 (0:00:00.318) 0:00:14.862 ****** 2026-01-03 00:58:56.127399 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:58:56.127407 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:58:56.127414 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:58:56.127457 | orchestrator | 2026-01-03 00:58:56.127467 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-03 00:58:56.127476 | orchestrator | Saturday 03 January 2026 00:57:04 +0000 (0:00:00.307) 0:00:15.170 ****** 2026-01-03 00:58:56.127483 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:58:56.127491 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:58:56.127498 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:58:56.127504 | orchestrator | 2026-01-03 00:58:56.127511 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-03 00:58:56.127517 | orchestrator | Saturday 03 January 2026 
00:57:04 +0000 (0:00:00.469) 0:00:15.639 ****** 2026-01-03 00:58:56.127525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c38584cd--f033--5ed2--9691--83456ad614b7-osd--block--c38584cd--f033--5ed2--9691--83456ad614b7', 'dm-uuid-LVM-E0SLy0xxpfD6sTvVCIDPbqNc4GMCOCUptP94SpiYGE5vofYYlylLirpwuCLL2IIP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-03 00:58:56.127540 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d5e4cbc2--7f45--5eff--bf2d--d06fd7ec5898-osd--block--d5e4cbc2--7f45--5eff--bf2d--d06fd7ec5898', 'dm-uuid-LVM-V8Qk00zkomK0NL3Q4cqrm8tfvImB27p4tpR6HKkJ5iLRmvpxnNpbZjzV0CtdmwQs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-03 00:58:56.127547 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:58:56.127555 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:58:56.127568 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:58:56.127574 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:58:56.127580 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:58:56.127609 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:58:56.127618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:58:56.127625 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:58:56.127638 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba', 'scsi-SQEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part1', 'scsi-SQEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part14', 'scsi-SQEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part15', 'scsi-SQEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part16', 'scsi-SQEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:58:56.127653 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c38584cd--f033--5ed2--9691--83456ad614b7-osd--block--c38584cd--f033--5ed2--9691--83456ad614b7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BRWCS0-dcrg-y2sh-Oroo-Kq1m-UIyS-kyZoBl', 'scsi-0QEMU_QEMU_HARDDISK_2050ce1a-3081-4edd-a04d-3576bece8338', 'scsi-SQEMU_QEMU_HARDDISK_2050ce1a-3081-4edd-a04d-3576bece8338'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:58:56.127680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d5e4cbc2--7f45--5eff--bf2d--d06fd7ec5898-osd--block--d5e4cbc2--7f45--5eff--bf2d--d06fd7ec5898'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DfzhTZ-p50D-CgcH-gVNP-0T9N-kPcG-1dOPE9', 'scsi-0QEMU_QEMU_HARDDISK_deb598c2-f543-4f9b-b077-315ce19fa743', 'scsi-SQEMU_QEMU_HARDDISK_deb598c2-f543-4f9b-b077-315ce19fa743'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:58:56.127689 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--85e74b82--cd6e--500e--9461--b867f1cfbb6a-osd--block--85e74b82--cd6e--500e--9461--b867f1cfbb6a', 'dm-uuid-LVM-NGHa1wUn8V350RlbQkJyBkV1rAqUU52v6nrYcdeahLIqO19Dbf8R3enPFwK8NgU9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-03 00:58:56.127698 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f493d531-f14a-40ab-852d-4e184520cb25', 'scsi-SQEMU_QEMU_HARDDISK_f493d531-f14a-40ab-852d-4e184520cb25'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:58:56.127706 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1ae59360--fa3d--59bd--b3b8--51590acdfd6e-osd--block--1ae59360--fa3d--59bd--b3b8--51590acdfd6e', 'dm-uuid-LVM-x0tcY9oMSmUzULFEhVgjmU1edzjHsa9qH2UeuFA78MnOtpX4Ju5rXgC9oBuuBBHY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-03 00:58:56.127718 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:58:56.127725 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:58:56.127732 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:58:56.127759 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:58:56.127767 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:58:56.127774 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:58:56.127781 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:58:56.127794 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:58:56.127801 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:58:56.127813 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:58:56.127840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0', 'scsi-SQEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part1', 'scsi-SQEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part14', 'scsi-SQEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part15', 'scsi-SQEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part16', 'scsi-SQEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:58:56.127851 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c0772612--0fc2--543a--b7cc--c9fc1cdd665f-osd--block--c0772612--0fc2--543a--b7cc--c9fc1cdd665f', 'dm-uuid-LVM-L3YoutWQquMEZSSYtKpK6iMm17YuKmdhxZDFI0w81VqyoVae0ofnrDxH7gZJ3y2m'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-03 00:58:56.127862 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--85e74b82--cd6e--500e--9461--b867f1cfbb6a-osd--block--85e74b82--cd6e--500e--9461--b867f1cfbb6a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-crt6dL-CeDZ-3hms-lPDz-CD85-4F34-gRqb46', 'scsi-0QEMU_QEMU_HARDDISK_c0ea832c-91ed-4e4f-b69a-de1dd6828a04', 'scsi-SQEMU_QEMU_HARDDISK_c0ea832c-91ed-4e4f-b69a-de1dd6828a04'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:58:56.127877 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--45670551--be8c--5463--bb13--3841732d7282-osd--block--45670551--be8c--5463--bb13--3841732d7282', 'dm-uuid-LVM-XigZQOTdcftuIUPTt9fZjIvpXyb1vJf2OL88b8i2lUQSeWGs78yAg3dsKPUiWBn1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': 
'512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-03 00:58:56.127883 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1ae59360--fa3d--59bd--b3b8--51590acdfd6e-osd--block--1ae59360--fa3d--59bd--b3b8--51590acdfd6e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EoW2yZ-4Rbk-tIRq-J6CD-zI2A-8Kl3-8ohoyA', 'scsi-0QEMU_QEMU_HARDDISK_92ee9088-f522-4da5-b9de-cc8e73fea3b4', 'scsi-SQEMU_QEMU_HARDDISK_92ee9088-f522-4da5-b9de-cc8e73fea3b4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:58:56.127889 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:58:56.127899 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64f2fd4f-89e8-4ffa-8baf-bdc6a23cfca5', 'scsi-SQEMU_QEMU_HARDDISK_64f2fd4f-89e8-4ffa-8baf-bdc6a23cfca5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:58:56.127905 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:58:56.127911 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:58:56.127918 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:58:56.127928 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:58:56.127940 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:58:56.127947 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:58:56.127953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:58:56.127960 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:58:56.127967 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:58:56.127988 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b', 'scsi-SQEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part1', 'scsi-SQEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part14', 'scsi-SQEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part15', 'scsi-SQEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part16', 
'scsi-SQEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:58:56.128002 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c0772612--0fc2--543a--b7cc--c9fc1cdd665f-osd--block--c0772612--0fc2--543a--b7cc--c9fc1cdd665f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-L3YK4j-1nNb-nWx2-VZ0W-SrCJ-Bt6D-C16i1e', 'scsi-0QEMU_QEMU_HARDDISK_18deaf14-926e-4cd7-8e92-2fabf4ecc6e0', 'scsi-SQEMU_QEMU_HARDDISK_18deaf14-926e-4cd7-8e92-2fabf4ecc6e0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:58:56.128009 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--45670551--be8c--5463--bb13--3841732d7282-osd--block--45670551--be8c--5463--bb13--3841732d7282'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lBuccY-K5SU-jpvV-AeFo-xoB9-n0WZ-HqUcnJ', 'scsi-0QEMU_QEMU_HARDDISK_b0c096f4-c40f-4db0-bd86-40b4e9f72c6c', 'scsi-SQEMU_QEMU_HARDDISK_b0c096f4-c40f-4db0-bd86-40b4e9f72c6c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:58:56.128017 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_75764784-fbeb-447b-add5-f3485e6783bd', 'scsi-SQEMU_QEMU_HARDDISK_75764784-fbeb-447b-add5-f3485e6783bd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:58:56.128030 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:58:56.128036 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:58:56.128042 | orchestrator | 2026-01-03 00:58:56.128048 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-01-03 00:58:56.128055 | orchestrator | Saturday 03 January 2026 00:57:05 +0000 (0:00:00.504) 0:00:16.144 ****** 2026-01-03 00:58:56.128065 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c38584cd--f033--5ed2--9691--83456ad614b7-osd--block--c38584cd--f033--5ed2--9691--83456ad614b7', 'dm-uuid-LVM-E0SLy0xxpfD6sTvVCIDPbqNc4GMCOCUptP94SpiYGE5vofYYlylLirpwuCLL2IIP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.128077 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d5e4cbc2--7f45--5eff--bf2d--d06fd7ec5898-osd--block--d5e4cbc2--7f45--5eff--bf2d--d06fd7ec5898', 'dm-uuid-LVM-V8Qk00zkomK0NL3Q4cqrm8tfvImB27p4tpR6HKkJ5iLRmvpxnNpbZjzV0CtdmwQs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.128084 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.128092 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.128100 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.128114 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.128121 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.128136 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.128143 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--85e74b82--cd6e--500e--9461--b867f1cfbb6a-osd--block--85e74b82--cd6e--500e--9461--b867f1cfbb6a', 'dm-uuid-LVM-NGHa1wUn8V350RlbQkJyBkV1rAqUU52v6nrYcdeahLIqO19Dbf8R3enPFwK8NgU9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.128149 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.128155 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1ae59360--fa3d--59bd--b3b8--51590acdfd6e-osd--block--1ae59360--fa3d--59bd--b3b8--51590acdfd6e', 'dm-uuid-LVM-x0tcY9oMSmUzULFEhVgjmU1edzjHsa9qH2UeuFA78MnOtpX4Ju5rXgC9oBuuBBHY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-01-03 00:58:56.128165 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.128172 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.128188 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba', 'scsi-SQEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part1', 'scsi-SQEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part14', 'scsi-SQEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part15', 'scsi-SQEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part16', 'scsi-SQEMU_QEMU_HARDDISK_0ebcfbce-5d6f-4157-9dc3-54fb70e0d4ba-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-03 00:58:56.128197 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.128209 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c38584cd--f033--5ed2--9691--83456ad614b7-osd--block--c38584cd--f033--5ed2--9691--83456ad614b7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BRWCS0-dcrg-y2sh-Oroo-Kq1m-UIyS-kyZoBl', 'scsi-0QEMU_QEMU_HARDDISK_2050ce1a-3081-4edd-a04d-3576bece8338', 'scsi-SQEMU_QEMU_HARDDISK_2050ce1a-3081-4edd-a04d-3576bece8338'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.128227 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.128234 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d5e4cbc2--7f45--5eff--bf2d--d06fd7ec5898-osd--block--d5e4cbc2--7f45--5eff--bf2d--d06fd7ec5898'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DfzhTZ-p50D-CgcH-gVNP-0T9N-kPcG-1dOPE9', 'scsi-0QEMU_QEMU_HARDDISK_deb598c2-f543-4f9b-b077-315ce19fa743', 'scsi-SQEMU_QEMU_HARDDISK_deb598c2-f543-4f9b-b077-315ce19fa743'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.128241 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.128248 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f493d531-f14a-40ab-852d-4e184520cb25', 'scsi-SQEMU_QEMU_HARDDISK_f493d531-f14a-40ab-852d-4e184520cb25'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:58:56.128262 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:58:56.128350 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:58:56.128374 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:58:56.128383 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:58:56.128390 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:58:56.128397 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:58:56.128411 | orchestrator | skipping:
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0', 'scsi-SQEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part1', 'scsi-SQEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part14', 'scsi-SQEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part15', 'scsi-SQEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part16', 'scsi-SQEMU_QEMU_HARDDISK_cfa8bbc9-3af6-4fdc-bb55-92ea838a61b0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.128432 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--85e74b82--cd6e--500e--9461--b867f1cfbb6a-osd--block--85e74b82--cd6e--500e--9461--b867f1cfbb6a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-crt6dL-CeDZ-3hms-lPDz-CD85-4F34-gRqb46', 'scsi-0QEMU_QEMU_HARDDISK_c0ea832c-91ed-4e4f-b69a-de1dd6828a04', 'scsi-SQEMU_QEMU_HARDDISK_c0ea832c-91ed-4e4f-b69a-de1dd6828a04'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.128439 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c0772612--0fc2--543a--b7cc--c9fc1cdd665f-osd--block--c0772612--0fc2--543a--b7cc--c9fc1cdd665f', 'dm-uuid-LVM-L3YoutWQquMEZSSYtKpK6iMm17YuKmdhxZDFI0w81VqyoVae0ofnrDxH7gZJ3y2m'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': 
'20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.128445 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1ae59360--fa3d--59bd--b3b8--51590acdfd6e-osd--block--1ae59360--fa3d--59bd--b3b8--51590acdfd6e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EoW2yZ-4Rbk-tIRq-J6CD-zI2A-8Kl3-8ohoyA', 'scsi-0QEMU_QEMU_HARDDISK_92ee9088-f522-4da5-b9de-cc8e73fea3b4', 'scsi-SQEMU_QEMU_HARDDISK_92ee9088-f522-4da5-b9de-cc8e73fea3b4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.128457 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--45670551--be8c--5463--bb13--3841732d7282-osd--block--45670551--be8c--5463--bb13--3841732d7282', 'dm-uuid-LVM-XigZQOTdcftuIUPTt9fZjIvpXyb1vJf2OL88b8i2lUQSeWGs78yAg3dsKPUiWBn1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.128471 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64f2fd4f-89e8-4ffa-8baf-bdc6a23cfca5', 'scsi-SQEMU_QEMU_HARDDISK_64f2fd4f-89e8-4ffa-8baf-bdc6a23cfca5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.128478 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.128484 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:58:56.128490 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:58:56.128497 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:58:56.128503 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:58:56.128520 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:58:56.128526 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:58:56.128536 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:58:56.128543 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:58:56.128550 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:58:56.128560 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b', 'scsi-SQEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part1', 'scsi-SQEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part14', 'scsi-SQEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part15', 'scsi-SQEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part16', 'scsi-SQEMU_QEMU_HARDDISK_20e2a322-8c31-40eb-9f80-64f14276ce8b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-03 00:58:56.128575 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c0772612--0fc2--543a--b7cc--c9fc1cdd665f-osd--block--c0772612--0fc2--543a--b7cc--c9fc1cdd665f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-L3YK4j-1nNb-nWx2-VZ0W-SrCJ-Bt6D-C16i1e', 'scsi-0QEMU_QEMU_HARDDISK_18deaf14-926e-4cd7-8e92-2fabf4ecc6e0', 'scsi-SQEMU_QEMU_HARDDISK_18deaf14-926e-4cd7-8e92-2fabf4ecc6e0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.128583 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--45670551--be8c--5463--bb13--3841732d7282-osd--block--45670551--be8c--5463--bb13--3841732d7282'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lBuccY-K5SU-jpvV-AeFo-xoB9-n0WZ-HqUcnJ', 'scsi-0QEMU_QEMU_HARDDISK_b0c096f4-c40f-4db0-bd86-40b4e9f72c6c', 'scsi-SQEMU_QEMU_HARDDISK_b0c096f4-c40f-4db0-bd86-40b4e9f72c6c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.128589 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_75764784-fbeb-447b-add5-f3485e6783bd', 'scsi-SQEMU_QEMU_HARDDISK_75764784-fbeb-447b-add5-f3485e6783bd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:58:56.128605 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:58:56.128611 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:58:56.128618 | orchestrator |
2026-01-03 00:58:56.128624 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-01-03 00:58:56.128632 | orchestrator | Saturday 03 January 2026 00:57:05 +0000 (0:00:00.510) 0:00:16.654 ******
2026-01-03 00:58:56.128639 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:58:56.128645 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:58:56.128651 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:58:56.128657 | orchestrator |
2026-01-03 00:58:56.128663 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-01-03 00:58:56.128669 | orchestrator | Saturday 03 January 2026 00:57:06 +0000 (0:00:00.615) 0:00:17.269 ******
2026-01-03 00:58:56.128675 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:58:56.128681 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:58:56.128687 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:58:56.128694 | orchestrator |
2026-01-03 00:58:56.128700 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-03 00:58:56.128706 | orchestrator | Saturday 03 January 2026 00:57:06 +0000 (0:00:00.317) 0:00:17.587 ******
2026-01-03 00:58:56.128712 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:58:56.128719 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:58:56.128725 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:58:56.128730 | orchestrator |
2026-01-03 00:58:56.128741 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-03 00:58:56.128747 | orchestrator | Saturday 03 January 2026 00:57:07 +0000 (0:00:00.635) 0:00:18.223 ******
2026-01-03 00:58:56.128753 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:58:56.128759 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:58:56.128766 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:58:56.128771 | orchestrator |
2026-01-03 00:58:56.128777 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-03 00:58:56.128784 | orchestrator | Saturday 03 January 2026 00:57:07 +0000 (0:00:00.277) 0:00:18.501 ******
2026-01-03 00:58:56.128791 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:58:56.128797 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:58:56.128803 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:58:56.128809 | orchestrator |
2026-01-03 00:58:56.128815 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-03 00:58:56.128821 | orchestrator | Saturday 03 January 2026 00:57:07 +0000 (0:00:00.359) 0:00:18.860 ******
2026-01-03 00:58:56.128826 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:58:56.128832 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:58:56.128838 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:58:56.128844 | orchestrator |
2026-01-03 00:58:56.128850 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-01-03 00:58:56.128858 | orchestrator | Saturday 03 January 2026 00:57:08 +0000 (0:00:00.433) 0:00:19.294 ******
2026-01-03 00:58:56.128874 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-01-03 00:58:56.128891 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-01-03 00:58:56.128917 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-01-03 00:58:56.128927 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-01-03 00:58:56.128934 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-01-03 00:58:56.128941 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-01-03 00:58:56.128948 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-01-03 00:58:56.128956 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-01-03 00:58:56.128964 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-01-03 00:58:56.128971 | orchestrator |
2026-01-03 00:58:56.128977 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-01-03 00:58:56.128984 | orchestrator | Saturday 03 January 2026 00:57:09 +0000 (0:00:00.727) 0:00:20.021 ******
2026-01-03 00:58:56.128992 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-03 00:58:56.129000 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-03 00:58:56.129007 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-03 00:58:56.129014 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:58:56.129021 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-03 00:58:56.129028 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-03 00:58:56.129036 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-03 00:58:56.129042 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:58:56.129048 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-03 00:58:56.129055 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-03 00:58:56.129061 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-03 00:58:56.129068 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:58:56.129074 | orchestrator |
2026-01-03 00:58:56.129080 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-01-03 00:58:56.129086 | orchestrator | Saturday 03 January 2026 00:57:09 +0000 (0:00:00.305) 0:00:20.327 ******
2026-01-03 00:58:56.129093 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:58:56.129100 | orchestrator |
2026-01-03 00:58:56.129106 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-03 00:58:56.129114 | orchestrator | Saturday 03 January 2026 00:57:09 +0000 (0:00:00.548) 0:00:20.875 ******
2026-01-03 00:58:56.129128 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:58:56.129135 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:58:56.129141 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:58:56.129147 | orchestrator |
2026-01-03 00:58:56.129154 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-03 00:58:56.129160 | orchestrator | Saturday 03 January 2026 00:57:10 +0000 (0:00:00.276) 0:00:21.152 ******
2026-01-03 00:58:56.129166 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:58:56.129173 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:58:56.129180 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:58:56.129186 | orchestrator |
2026-01-03 00:58:56.129192 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-03 00:58:56.129199 | orchestrator | Saturday 03 January 2026 00:57:10 +0000 (0:00:00.289) 0:00:21.441 ******
2026-01-03 00:58:56.129205 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:58:56.129211 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:58:56.129218 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:58:56.129224 | orchestrator |
2026-01-03 00:58:56.129231 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-03 00:58:56.129237 | orchestrator | Saturday 03 January 2026 00:57:10 +0000 (0:00:00.289) 0:00:21.731 ****** 2026-01-03
00:58:56.129243 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:58:56.129249 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:58:56.129261 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:58:56.129267 | orchestrator | 2026-01-03 00:58:56.129300 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-03 00:58:56.129308 | orchestrator | Saturday 03 January 2026 00:57:11 +0000 (0:00:00.554) 0:00:22.285 ****** 2026-01-03 00:58:56.129313 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-03 00:58:56.129319 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-03 00:58:56.129336 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-03 00:58:56.129342 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:58:56.129348 | orchestrator | 2026-01-03 00:58:56.129354 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-03 00:58:56.129361 | orchestrator | Saturday 03 January 2026 00:57:11 +0000 (0:00:00.361) 0:00:22.646 ****** 2026-01-03 00:58:56.129367 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-03 00:58:56.129373 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-03 00:58:56.129379 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-03 00:58:56.129385 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:58:56.129390 | orchestrator | 2026-01-03 00:58:56.129397 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-03 00:58:56.129403 | orchestrator | Saturday 03 January 2026 00:57:12 +0000 (0:00:00.371) 0:00:23.017 ****** 2026-01-03 00:58:56.129409 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-03 00:58:56.129415 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-03 00:58:56.129421 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-03 00:58:56.129427 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:58:56.129433 | orchestrator | 2026-01-03 00:58:56.129439 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-03 00:58:56.129446 | orchestrator | Saturday 03 January 2026 00:57:12 +0000 (0:00:00.363) 0:00:23.381 ****** 2026-01-03 00:58:56.129453 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:58:56.129459 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:58:56.129465 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:58:56.129471 | orchestrator | 2026-01-03 00:58:56.129478 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-03 00:58:56.129484 | orchestrator | Saturday 03 January 2026 00:57:12 +0000 (0:00:00.317) 0:00:23.698 ****** 2026-01-03 00:58:56.129490 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-03 00:58:56.129499 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-03 00:58:56.129506 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-03 00:58:56.129513 | orchestrator | 2026-01-03 00:58:56.129520 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-03 00:58:56.129526 | orchestrator | Saturday 03 January 2026 00:57:13 +0000 (0:00:00.504) 0:00:24.203 ****** 2026-01-03 00:58:56.129533 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-03 00:58:56.129541 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-03 00:58:56.129547 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-03 00:58:56.129554 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-03 00:58:56.129560 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-01-03 00:58:56.129566 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-03 00:58:56.129572 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-03 00:58:56.129578 | orchestrator | 2026-01-03 00:58:56.129584 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-03 00:58:56.129590 | orchestrator | Saturday 03 January 2026 00:57:14 +0000 (0:00:00.929) 0:00:25.132 ****** 2026-01-03 00:58:56.129605 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-03 00:58:56.129611 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-03 00:58:56.129617 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-03 00:58:56.129622 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-03 00:58:56.129628 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-03 00:58:56.129634 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-03 00:58:56.129647 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-03 00:58:56.129653 | orchestrator | 2026-01-03 00:58:56.129659 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-01-03 00:58:56.129666 | orchestrator | Saturday 03 January 2026 00:57:16 +0000 (0:00:01.841) 0:00:26.974 ****** 2026-01-03 00:58:56.129672 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:58:56.129677 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:58:56.129682 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-01-03 00:58:56.129688 | orchestrator | 2026-01-03 00:58:56.129695 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-01-03 00:58:56.129701 | orchestrator | Saturday 03 January 2026 00:57:16 +0000 (0:00:00.363) 0:00:27.337 ****** 2026-01-03 00:58:56.129709 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-03 00:58:56.129717 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-03 00:58:56.129728 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-03 00:58:56.129735 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-03 00:58:56.129741 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-03 00:58:56.129747 | orchestrator | 2026-01-03 00:58:56.129752 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-01-03 00:58:56.129758 | orchestrator | Saturday 03 January 2026 00:57:59 +0000 (0:00:43.412) 0:01:10.750 ****** 2026-01-03 00:58:56.129765 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:58:56.129771 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:58:56.129777 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:58:56.129783 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:58:56.129789 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:58:56.129795 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:58:56.129807 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-01-03 00:58:56.129813 | orchestrator | 2026-01-03 00:58:56.129819 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-01-03 00:58:56.129825 | orchestrator | Saturday 03 January 2026 00:58:24 +0000 (0:00:24.554) 0:01:35.305 ****** 2026-01-03 00:58:56.129831 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:58:56.129837 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:58:56.129843 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:58:56.129849 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:58:56.129855 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:58:56.129861 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:58:56.129867 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-03 00:58:56.129873 | orchestrator | 2026-01-03 00:58:56.129878 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-01-03 00:58:56.129885 | orchestrator | Saturday 03 January 2026 00:58:36 +0000 (0:00:12.283) 0:01:47.588 ****** 2026-01-03 00:58:56.129891 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:58:56.129897 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-03 00:58:56.129903 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-03 00:58:56.129909 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:58:56.129915 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-03 00:58:56.129928 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-03 00:58:56.129934 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:58:56.129940 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-03 00:58:56.129945 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-03 00:58:56.129973 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:58:56.129980 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-03 00:58:56.129986 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-03 00:58:56.129992 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:58:56.129998 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-01-03 00:58:56.130004 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-03 00:58:56.130010 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:58:56.130062 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-03 00:58:56.130069 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-03 00:58:56.130076 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-01-03 00:58:56.130084 | orchestrator | 2026-01-03 00:58:56.130095 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:58:56.130102 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-01-03 00:58:56.130111 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-01-03 00:58:56.130119 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-01-03 00:58:56.130135 | orchestrator | 2026-01-03 00:58:56.130143 | orchestrator | 2026-01-03 00:58:56.130150 | orchestrator | 2026-01-03 00:58:56.130158 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:58:56.130165 | orchestrator | Saturday 03 January 2026 00:58:54 +0000 (0:00:17.361) 0:02:04.949 ****** 2026-01-03 00:58:56.130172 | orchestrator | =============================================================================== 2026-01-03 00:58:56.130180 | orchestrator | create openstack pool(s) ----------------------------------------------- 43.41s 2026-01-03 00:58:56.130187 | orchestrator | generate keys ---------------------------------------------------------- 24.55s 2026-01-03 00:58:56.130195 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.36s 
2026-01-03 00:58:56.130203 | orchestrator | get keys from monitors ------------------------------------------------- 12.28s 2026-01-03 00:58:56.130210 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.03s 2026-01-03 00:58:56.130217 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.84s 2026-01-03 00:58:56.130224 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.65s 2026-01-03 00:58:56.130231 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.93s 2026-01-03 00:58:56.130238 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.80s 2026-01-03 00:58:56.130245 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.73s 2026-01-03 00:58:56.130252 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.73s 2026-01-03 00:58:56.130259 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.68s 2026-01-03 00:58:56.130267 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.66s 2026-01-03 00:58:56.130299 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.64s 2026-01-03 00:58:56.130305 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.64s 2026-01-03 00:58:56.130312 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.62s 2026-01-03 00:58:56.130317 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.60s 2026-01-03 00:58:56.130323 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.55s 2026-01-03 00:58:56.130329 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.55s 2026-01-03 
00:58:56.130335 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.51s 2026-01-03 00:58:56.130340 | orchestrator | 2026-01-03 00:58:56 | INFO  | Task 59b8fe13-768a-4cc4-b35e-454dc69ec66c is in state STARTED 2026-01-03 00:58:56.130346 | orchestrator | 2026-01-03 00:58:56 | INFO  | Task 41c057ee-3c03-40bc-9d3e-5ff87ebdd950 is in state STARTED 2026-01-03 00:58:56.130353 | orchestrator | 2026-01-03 00:58:56 | INFO  | Task 3a5bd2ff-0080-4e6c-b47b-9343e3fb6cd6 is in state STARTED 2026-01-03 00:58:56.130359 | orchestrator | 2026-01-03 00:58:56 | INFO  | Task 28f3b550-d646-4431-9ba5-4c2ae0a31358 is in state STARTED 2026-01-03 00:58:56.130373 | orchestrator | 2026-01-03 00:58:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:58:59.170257 | orchestrator | 2026-01-03 00:58:59 | INFO  | Task dd16cd92-4ba7-479b-9b44-63681e0e3f35 is in state STARTED 2026-01-03 00:58:59.173065 | orchestrator | 2026-01-03 00:58:59 | INFO  | Task 59b8fe13-768a-4cc4-b35e-454dc69ec66c is in state STARTED 2026-01-03 00:58:59.174957 | orchestrator | 2026-01-03 00:58:59 | INFO  | Task 41c057ee-3c03-40bc-9d3e-5ff87ebdd950 is in state STARTED 2026-01-03 00:58:59.176220 | orchestrator | 2026-01-03 00:58:59 | INFO  | Task 3a5bd2ff-0080-4e6c-b47b-9343e3fb6cd6 is in state STARTED 2026-01-03 00:58:59.177575 | orchestrator | 2026-01-03 00:58:59 | INFO  | Task 28f3b550-d646-4431-9ba5-4c2ae0a31358 is in state STARTED 2026-01-03 00:58:59.177656 | orchestrator | 2026-01-03 00:58:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:59:02.221528 | orchestrator | 2026-01-03 00:59:02 | INFO  | Task dd16cd92-4ba7-479b-9b44-63681e0e3f35 is in state STARTED 2026-01-03 00:59:02.223802 | orchestrator | 2026-01-03 00:59:02 | INFO  | Task 59b8fe13-768a-4cc4-b35e-454dc69ec66c is in state STARTED 2026-01-03 00:59:02.225051 | orchestrator | 2026-01-03 00:59:02 | INFO  | Task 41c057ee-3c03-40bc-9d3e-5ff87ebdd950 is in state STARTED 2026-01-03 
00:59:02.226604 | orchestrator | 2026-01-03 00:59:02 | INFO  | Task 3a5bd2ff-0080-4e6c-b47b-9343e3fb6cd6 is in state STARTED 2026-01-03 00:59:02.230575 | orchestrator | 2026-01-03 00:59:02 | INFO  | Task 28f3b550-d646-4431-9ba5-4c2ae0a31358 is in state STARTED 2026-01-03 00:59:02.230639 | orchestrator | 2026-01-03 00:59:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:59:05.278701 | orchestrator | 2026-01-03 00:59:05 | INFO  | Task dd16cd92-4ba7-479b-9b44-63681e0e3f35 is in state STARTED 2026-01-03 00:59:05.279943 | orchestrator | 2026-01-03 00:59:05 | INFO  | Task 59b8fe13-768a-4cc4-b35e-454dc69ec66c is in state STARTED 2026-01-03 00:59:05.281188 | orchestrator | 2026-01-03 00:59:05 | INFO  | Task 41c057ee-3c03-40bc-9d3e-5ff87ebdd950 is in state STARTED 2026-01-03 00:59:05.282137 | orchestrator | 2026-01-03 00:59:05 | INFO  | Task 3a5bd2ff-0080-4e6c-b47b-9343e3fb6cd6 is in state STARTED 2026-01-03 00:59:05.283476 | orchestrator | 2026-01-03 00:59:05 | INFO  | Task 28f3b550-d646-4431-9ba5-4c2ae0a31358 is in state STARTED 2026-01-03 00:59:05.283494 | orchestrator | 2026-01-03 00:59:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:59:08.326658 | orchestrator | 2026-01-03 00:59:08 | INFO  | Task dd16cd92-4ba7-479b-9b44-63681e0e3f35 is in state STARTED 2026-01-03 00:59:08.329676 | orchestrator | 2026-01-03 00:59:08 | INFO  | Task 59b8fe13-768a-4cc4-b35e-454dc69ec66c is in state STARTED 2026-01-03 00:59:08.332479 | orchestrator | 2026-01-03 00:59:08 | INFO  | Task 41c057ee-3c03-40bc-9d3e-5ff87ebdd950 is in state STARTED 2026-01-03 00:59:08.334464 | orchestrator | 2026-01-03 00:59:08 | INFO  | Task 3a5bd2ff-0080-4e6c-b47b-9343e3fb6cd6 is in state STARTED 2026-01-03 00:59:08.336717 | orchestrator | 2026-01-03 00:59:08 | INFO  | Task 28f3b550-d646-4431-9ba5-4c2ae0a31358 is in state STARTED 2026-01-03 00:59:08.336964 | orchestrator | 2026-01-03 00:59:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:59:11.389885 | orchestrator 
| 2026-01-03 00:59:11 | INFO  | Task dd16cd92-4ba7-479b-9b44-63681e0e3f35 is in state STARTED 2026-01-03 00:59:11.394268 | orchestrator | 2026-01-03 00:59:11 | INFO  | Task 59b8fe13-768a-4cc4-b35e-454dc69ec66c is in state STARTED 2026-01-03 00:59:11.397072 | orchestrator | 2026-01-03 00:59:11 | INFO  | Task 41c057ee-3c03-40bc-9d3e-5ff87ebdd950 is in state STARTED 2026-01-03 00:59:11.399448 | orchestrator | 2026-01-03 00:59:11 | INFO  | Task 3a5bd2ff-0080-4e6c-b47b-9343e3fb6cd6 is in state STARTED 2026-01-03 00:59:11.401982 | orchestrator | 2026-01-03 00:59:11 | INFO  | Task 28f3b550-d646-4431-9ba5-4c2ae0a31358 is in state STARTED 2026-01-03 00:59:11.402429 | orchestrator | 2026-01-03 00:59:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:59:14.451035 | orchestrator | 2026-01-03 00:59:14 | INFO  | Task dd16cd92-4ba7-479b-9b44-63681e0e3f35 is in state STARTED 2026-01-03 00:59:14.452031 | orchestrator | 2026-01-03 00:59:14 | INFO  | Task 8a4ccb14-5583-4040-939e-f58397d39c17 is in state STARTED 2026-01-03 00:59:14.454416 | orchestrator | 2026-01-03 00:59:14 | INFO  | Task 59b8fe13-768a-4cc4-b35e-454dc69ec66c is in state STARTED 2026-01-03 00:59:14.458204 | orchestrator | 2026-01-03 00:59:14 | INFO  | Task 41c057ee-3c03-40bc-9d3e-5ff87ebdd950 is in state SUCCESS 2026-01-03 00:59:14.459240 | orchestrator | 2026-01-03 00:59:14.459294 | orchestrator | 2026-01-03 00:59:14.459303 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-03 00:59:14.459310 | orchestrator | 2026-01-03 00:59:14.459316 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-03 00:59:14.459386 | orchestrator | Saturday 03 January 2026 00:57:36 +0000 (0:00:00.245) 0:00:00.245 ****** 2026-01-03 00:59:14.459395 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:59:14.459403 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:59:14.459409 | orchestrator | ok: [testbed-node-2] 2026-01-03 
00:59:14.459416 | orchestrator | 2026-01-03 00:59:14.459423 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-03 00:59:14.459430 | orchestrator | Saturday 03 January 2026 00:57:37 +0000 (0:00:00.273) 0:00:00.519 ****** 2026-01-03 00:59:14.459437 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-01-03 00:59:14.459444 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-01-03 00:59:14.459450 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-01-03 00:59:14.459454 | orchestrator | 2026-01-03 00:59:14.459458 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-01-03 00:59:14.459462 | orchestrator | 2026-01-03 00:59:14.459477 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-03 00:59:14.459481 | orchestrator | Saturday 03 January 2026 00:57:37 +0000 (0:00:00.429) 0:00:00.948 ****** 2026-01-03 00:59:14.459486 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:59:14.459496 | orchestrator | 2026-01-03 00:59:14.459501 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-01-03 00:59:14.459508 | orchestrator | Saturday 03 January 2026 00:57:38 +0000 (0:00:00.500) 0:00:01.449 ****** 2026-01-03 00:59:14.459537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 
'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-03 00:59:14.459588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-03 00:59:14.459598 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': 
'80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-03 00:59:14.459611 | orchestrator | 2026-01-03 00:59:14.459618 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-01-03 00:59:14.459624 | orchestrator | Saturday 03 January 2026 00:57:39 +0000 (0:00:01.178) 0:00:02.627 ****** 2026-01-03 00:59:14.459630 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:59:14.459637 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:59:14.459644 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:59:14.459649 | orchestrator | 2026-01-03 00:59:14.459653 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-03 00:59:14.459660 | orchestrator | Saturday 03 January 2026 00:57:39 +0000 (0:00:00.412) 0:00:03.040 ****** 2026-01-03 00:59:14.459664 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-03 00:59:14.459670 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-03 00:59:14.460000 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-01-03 00:59:14.460012 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-01-03 00:59:14.460019 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-01-03 00:59:14.460026 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-01-03 00:59:14.460032 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-01-03 00:59:14.460039 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-01-03 00:59:14.460045 | orchestrator | skipping: [testbed-node-1] => 
(item={'name': 'cloudkitty', 'enabled': False})  2026-01-03 00:59:14.460051 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-03 00:59:14.460058 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-01-03 00:59:14.460064 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-01-03 00:59:14.460069 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-01-03 00:59:14.460075 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-01-03 00:59:14.460088 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-01-03 00:59:14.460095 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-01-03 00:59:14.460101 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-03 00:59:14.460107 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-03 00:59:14.460111 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-01-03 00:59:14.460115 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-01-03 00:59:14.460119 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-01-03 00:59:14.460123 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-01-03 00:59:14.460127 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-01-03 00:59:14.460130 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-01-03 00:59:14.460135 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => 
(item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-01-03 00:59:14.460149 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-01-03 00:59:14.460153 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-01-03 00:59:14.460158 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-01-03 00:59:14.460164 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-01-03 00:59:14.460170 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-01-03 00:59:14.460175 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-01-03 00:59:14.460181 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-01-03 00:59:14.460188 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-01-03 00:59:14.460195 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-01-03 00:59:14.460201 | orchestrator | 2026-01-03 00:59:14.460207 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-03 
00:59:14.460214 | orchestrator | Saturday 03 January 2026 00:57:40 +0000 (0:00:00.718) 0:00:03.758 ****** 2026-01-03 00:59:14.460218 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:59:14.460222 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:59:14.460226 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:59:14.460230 | orchestrator | 2026-01-03 00:59:14.460242 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-03 00:59:14.460249 | orchestrator | Saturday 03 January 2026 00:57:40 +0000 (0:00:00.297) 0:00:04.056 ****** 2026-01-03 00:59:14.460255 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:14.460261 | orchestrator | 2026-01-03 00:59:14.460266 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-03 00:59:14.460272 | orchestrator | Saturday 03 January 2026 00:57:40 +0000 (0:00:00.126) 0:00:04.182 ****** 2026-01-03 00:59:14.460277 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:14.460282 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:59:14.460287 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:59:14.460297 | orchestrator | 2026-01-03 00:59:14.460305 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-03 00:59:14.460311 | orchestrator | Saturday 03 January 2026 00:57:41 +0000 (0:00:00.437) 0:00:04.619 ****** 2026-01-03 00:59:14.460316 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:59:14.460322 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:59:14.460351 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:59:14.460358 | orchestrator | 2026-01-03 00:59:14.460364 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-03 00:59:14.460370 | orchestrator | Saturday 03 January 2026 00:57:41 +0000 (0:00:00.298) 0:00:04.917 ****** 2026-01-03 00:59:14.460376 | orchestrator | skipping: [testbed-node-0] 
2026-01-03 00:59:14.460382 | orchestrator | 2026-01-03 00:59:14.460388 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-03 00:59:14.460393 | orchestrator | Saturday 03 January 2026 00:57:41 +0000 (0:00:00.127) 0:00:05.045 ****** 2026-01-03 00:59:14.460405 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:14.460411 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:59:14.460417 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:59:14.460422 | orchestrator | 2026-01-03 00:59:14.460428 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-03 00:59:14.460434 | orchestrator | Saturday 03 January 2026 00:57:42 +0000 (0:00:00.280) 0:00:05.326 ****** 2026-01-03 00:59:14.460444 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:59:14.460451 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:59:14.460456 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:59:14.460462 | orchestrator | 2026-01-03 00:59:14.460468 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-03 00:59:14.460474 | orchestrator | Saturday 03 January 2026 00:57:42 +0000 (0:00:00.287) 0:00:05.614 ****** 2026-01-03 00:59:14.460480 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:14.460486 | orchestrator | 2026-01-03 00:59:14.460492 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-03 00:59:14.460499 | orchestrator | Saturday 03 January 2026 00:57:42 +0000 (0:00:00.307) 0:00:05.921 ****** 2026-01-03 00:59:14.460507 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:14.460511 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:59:14.460515 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:59:14.460518 | orchestrator | 2026-01-03 00:59:14.460522 | orchestrator | TASK [horizon : Update policy file name] 
*************************************** 2026-01-03 00:59:14.460526 | orchestrator | Saturday 03 January 2026 00:57:42 +0000 (0:00:00.280) 0:00:06.202 ****** 2026-01-03 00:59:14.460530 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:59:14.460533 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:59:14.460537 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:59:14.460541 | orchestrator | 2026-01-03 00:59:14.460545 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-03 00:59:14.460549 | orchestrator | Saturday 03 January 2026 00:57:43 +0000 (0:00:00.312) 0:00:06.515 ****** 2026-01-03 00:59:14.460552 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:14.460556 | orchestrator | 2026-01-03 00:59:14.460560 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-03 00:59:14.460564 | orchestrator | Saturday 03 January 2026 00:57:43 +0000 (0:00:00.149) 0:00:06.664 ****** 2026-01-03 00:59:14.460568 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:14.460572 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:59:14.460575 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:59:14.460579 | orchestrator | 2026-01-03 00:59:14.460583 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-03 00:59:14.460586 | orchestrator | Saturday 03 January 2026 00:57:43 +0000 (0:00:00.277) 0:00:06.942 ****** 2026-01-03 00:59:14.460590 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:59:14.460594 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:59:14.460598 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:59:14.460602 | orchestrator | 2026-01-03 00:59:14.460605 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-03 00:59:14.460609 | orchestrator | Saturday 03 January 2026 00:57:44 +0000 (0:00:00.452) 0:00:07.395 ****** 2026-01-03 
00:59:14.460613 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:14.460617 | orchestrator | 2026-01-03 00:59:14.460621 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-03 00:59:14.460625 | orchestrator | Saturday 03 January 2026 00:57:44 +0000 (0:00:00.131) 0:00:07.527 ****** 2026-01-03 00:59:14.460628 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:14.460632 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:59:14.460636 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:59:14.460639 | orchestrator | 2026-01-03 00:59:14.460643 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-03 00:59:14.460647 | orchestrator | Saturday 03 January 2026 00:57:44 +0000 (0:00:00.279) 0:00:07.807 ****** 2026-01-03 00:59:14.460651 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:59:14.460659 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:59:14.460663 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:59:14.460667 | orchestrator | 2026-01-03 00:59:14.460670 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-03 00:59:14.460674 | orchestrator | Saturday 03 January 2026 00:57:44 +0000 (0:00:00.303) 0:00:08.110 ****** 2026-01-03 00:59:14.460678 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:14.460682 | orchestrator | 2026-01-03 00:59:14.460686 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-03 00:59:14.460690 | orchestrator | Saturday 03 January 2026 00:57:44 +0000 (0:00:00.134) 0:00:08.244 ****** 2026-01-03 00:59:14.460694 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:14.460698 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:59:14.460709 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:59:14.460713 | orchestrator | 2026-01-03 00:59:14.460717 | orchestrator | TASK [horizon : 
Update policy file name] *************************************** 2026-01-03 00:59:14.460720 | orchestrator | Saturday 03 January 2026 00:57:45 +0000 (0:00:00.288) 0:00:08.532 ****** 2026-01-03 00:59:14.460724 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:59:14.460728 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:59:14.460732 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:59:14.460735 | orchestrator | 2026-01-03 00:59:14.460739 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-03 00:59:14.460743 | orchestrator | Saturday 03 January 2026 00:57:45 +0000 (0:00:00.538) 0:00:09.071 ****** 2026-01-03 00:59:14.460747 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:14.460751 | orchestrator | 2026-01-03 00:59:14.460754 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-03 00:59:14.460758 | orchestrator | Saturday 03 January 2026 00:57:45 +0000 (0:00:00.113) 0:00:09.185 ****** 2026-01-03 00:59:14.460762 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:14.460766 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:59:14.460769 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:59:14.460773 | orchestrator | 2026-01-03 00:59:14.460777 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-03 00:59:14.460781 | orchestrator | Saturday 03 January 2026 00:57:46 +0000 (0:00:00.282) 0:00:09.468 ****** 2026-01-03 00:59:14.460785 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:59:14.460788 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:59:14.460792 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:59:14.460796 | orchestrator | 2026-01-03 00:59:14.460800 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-03 00:59:14.460804 | orchestrator | Saturday 03 January 2026 00:57:46 +0000 (0:00:00.281) 0:00:09.749 ****** 
2026-01-03 00:59:14.460808 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:14.460812 | orchestrator | 2026-01-03 00:59:14.460816 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-03 00:59:14.460823 | orchestrator | Saturday 03 January 2026 00:57:46 +0000 (0:00:00.130) 0:00:09.880 ****** 2026-01-03 00:59:14.460828 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:14.460831 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:59:14.460835 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:59:14.460839 | orchestrator | 2026-01-03 00:59:14.460842 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-03 00:59:14.460846 | orchestrator | Saturday 03 January 2026 00:57:47 +0000 (0:00:00.436) 0:00:10.316 ****** 2026-01-03 00:59:14.460850 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:59:14.460854 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:59:14.460858 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:59:14.460861 | orchestrator | 2026-01-03 00:59:14.460865 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-03 00:59:14.460869 | orchestrator | Saturday 03 January 2026 00:57:47 +0000 (0:00:00.297) 0:00:10.614 ****** 2026-01-03 00:59:14.460873 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:14.460877 | orchestrator | 2026-01-03 00:59:14.460885 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-03 00:59:14.460889 | orchestrator | Saturday 03 January 2026 00:57:47 +0000 (0:00:00.133) 0:00:10.747 ****** 2026-01-03 00:59:14.460893 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:14.460896 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:59:14.460900 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:59:14.460904 | orchestrator | 2026-01-03 00:59:14.460908 | orchestrator | TASK 
[horizon : Update policy file name] *************************************** 2026-01-03 00:59:14.460912 | orchestrator | Saturday 03 January 2026 00:57:47 +0000 (0:00:00.284) 0:00:11.032 ****** 2026-01-03 00:59:14.460915 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:59:14.460919 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:59:14.460923 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:59:14.460927 | orchestrator | 2026-01-03 00:59:14.460930 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-03 00:59:14.460934 | orchestrator | Saturday 03 January 2026 00:57:48 +0000 (0:00:00.308) 0:00:11.341 ****** 2026-01-03 00:59:14.460938 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:14.460942 | orchestrator | 2026-01-03 00:59:14.460945 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-03 00:59:14.460949 | orchestrator | Saturday 03 January 2026 00:57:48 +0000 (0:00:00.118) 0:00:11.460 ****** 2026-01-03 00:59:14.460953 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:14.460957 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:59:14.460961 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:59:14.460964 | orchestrator | 2026-01-03 00:59:14.460968 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-01-03 00:59:14.460972 | orchestrator | Saturday 03 January 2026 00:57:48 +0000 (0:00:00.458) 0:00:11.918 ****** 2026-01-03 00:59:14.460976 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:59:14.460980 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:59:14.460983 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:59:14.460987 | orchestrator | 2026-01-03 00:59:14.460991 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-01-03 00:59:14.460995 | orchestrator | Saturday 03 January 2026 00:57:50 +0000 
(0:00:01.642) 0:00:13.561 ****** 2026-01-03 00:59:14.460998 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-03 00:59:14.461017 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-03 00:59:14.461021 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-03 00:59:14.461025 | orchestrator | 2026-01-03 00:59:14.461029 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-01-03 00:59:14.461033 | orchestrator | Saturday 03 January 2026 00:57:52 +0000 (0:00:01.869) 0:00:15.430 ****** 2026-01-03 00:59:14.461037 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-03 00:59:14.461041 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-03 00:59:14.461049 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-03 00:59:14.461053 | orchestrator | 2026-01-03 00:59:14.461056 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-01-03 00:59:14.461060 | orchestrator | Saturday 03 January 2026 00:57:54 +0000 (0:00:02.120) 0:00:17.551 ****** 2026-01-03 00:59:14.461064 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-03 00:59:14.461068 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-03 00:59:14.461072 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-03 00:59:14.461075 | orchestrator | 2026-01-03 00:59:14.461079 | orchestrator | TASK [horizon : Copying over existing policy file] 
***************************** 2026-01-03 00:59:14.461087 | orchestrator | Saturday 03 January 2026 00:57:56 +0000 (0:00:01.783) 0:00:19.334 ****** 2026-01-03 00:59:14.461090 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:14.461094 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:59:14.461098 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:59:14.461102 | orchestrator | 2026-01-03 00:59:14.461105 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-01-03 00:59:14.461109 | orchestrator | Saturday 03 January 2026 00:57:56 +0000 (0:00:00.308) 0:00:19.643 ****** 2026-01-03 00:59:14.461113 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:14.461117 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:59:14.461121 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:59:14.461124 | orchestrator | 2026-01-03 00:59:14.461128 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-03 00:59:14.461132 | orchestrator | Saturday 03 January 2026 00:57:56 +0000 (0:00:00.293) 0:00:19.937 ****** 2026-01-03 00:59:14.461144 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:59:14.461148 | orchestrator | 2026-01-03 00:59:14.461152 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-01-03 00:59:14.461156 | orchestrator | Saturday 03 January 2026 00:57:57 +0000 (0:00:00.817) 0:00:20.754 ****** 2026-01-03 00:59:14.461163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 
'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-03 00:59:14.461176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 
'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-03 00:59:14.461188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-03 00:59:14.461196 | orchestrator | 2026-01-03 00:59:14.461200 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-01-03 00:59:14.461204 | orchestrator | Saturday 03 January 2026 00:57:58 +0000 (0:00:01.550) 0:00:22.305 ****** 2026-01-03 00:59:14.461208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-03 00:59:14.461213 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:14.461244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 
'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-03 00:59:14.461252 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:59:14.461259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-03 00:59:14.461263 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:59:14.461267 | orchestrator | 2026-01-03 00:59:14.461271 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-01-03 00:59:14.461275 | orchestrator | Saturday 03 January 2026 00:57:59 +0000 (0:00:00.702) 0:00:23.008 ****** 2026-01-03 00:59:14.461283 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-03 00:59:14.461291 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:14.461297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-03 00:59:14.461301 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:59:14.461312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-03 00:59:14.461323 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:59:14.461376 | orchestrator | 2026-01-03 00:59:14.461381 | orchestrator | TASK [service-check-containers : horizon | Check containers] ******************* 2026-01-03 00:59:14.461385 | orchestrator | Saturday 03 January 2026 00:58:00 +0000 (0:00:00.832) 0:00:23.840 ****** 2026-01-03 00:59:14.461389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-03 00:59:14.461407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-03 00:59:14.461416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-03 
00:59:14.461424 | orchestrator | 2026-01-03 00:59:14.461428 | orchestrator | TASK [service-check-containers : horizon | Notify handlers to restart containers] *** 2026-01-03 00:59:14.461433 | orchestrator | Saturday 03 January 2026 00:58:02 +0000 (0:00:01.504) 0:00:25.345 ****** 2026-01-03 00:59:14.461437 | orchestrator | changed: [testbed-node-0] => { 2026-01-03 00:59:14.461441 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:59:14.461556 | orchestrator | } 2026-01-03 00:59:14.461563 | orchestrator | changed: [testbed-node-1] => { 2026-01-03 00:59:14.461567 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:59:14.461571 | orchestrator | } 2026-01-03 00:59:14.461575 | orchestrator | changed: [testbed-node-2] => { 2026-01-03 00:59:14.461578 | orchestrator |  "msg": "Notifying handlers" 2026-01-03 00:59:14.461583 | orchestrator | } 2026-01-03 00:59:14.461586 | orchestrator | 2026-01-03 00:59:14.461590 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-03 00:59:14.461594 | orchestrator | Saturday 03 January 2026 00:58:02 +0000 (0:00:00.347) 0:00:25.692 ****** 2026-01-03 00:59:14.461603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-03 00:59:14.461614 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:14.461624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-03 00:59:14.461628 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:59:14.461635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': 
{'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-03 00:59:14.461643 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:59:14.461647 | orchestrator |
2026-01-03 00:59:14.461650 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-01-03 00:59:14.461654 | orchestrator | Saturday 03 January 2026 00:58:03 +0000 (0:00:01.106) 0:00:26.799 ******
2026-01-03 00:59:14.461658 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:59:14.461662 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:59:14.461665 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:59:14.461669 | orchestrator |
2026-01-03 00:59:14.461673 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-01-03 00:59:14.461677 | orchestrator | Saturday 03 January 2026 00:58:03 +0000 (0:00:00.475) 0:00:27.275 ******
2026-01-03 00:59:14.461681 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:59:14.461685 | orchestrator |
2026-01-03 00:59:14.461693 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-01-03 00:59:14.461697 | orchestrator | Saturday 03 January 2026 00:58:04 +0000 (0:00:00.516) 0:00:27.792 ******
2026-01-03 00:59:14.461701 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:59:14.461704 | orchestrator |
2026-01-03 00:59:14.461708 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-01-03 00:59:14.461712 | orchestrator | Saturday 03 January 2026 00:58:07 +0000 (0:00:02.525) 0:00:30.317 ******
2026-01-03 00:59:14.461716 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:59:14.461720 | orchestrator |
2026-01-03 00:59:14.461724 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-01-03 00:59:14.461727 | orchestrator | Saturday 03 January 2026 00:58:09 +0000 (0:00:02.482) 0:00:32.800 ******
2026-01-03 00:59:14.461731 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:59:14.461735 | orchestrator |
2026-01-03 00:59:14.461739 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-01-03 00:59:14.461743 | orchestrator | Saturday 03 January 2026 00:58:27 +0000 (0:00:17.525) 0:00:50.326 ******
2026-01-03 00:59:14.461746 | orchestrator |
2026-01-03 00:59:14.461750 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-01-03 00:59:14.461754 | orchestrator | Saturday 03 January 2026 00:58:27 +0000 (0:00:00.074) 0:00:50.401 ******
2026-01-03 00:59:14.461758 | orchestrator |
2026-01-03 00:59:14.461761 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-01-03 00:59:14.461765 | orchestrator | Saturday 03 January 2026 00:58:27 +0000 (0:00:00.211) 0:00:50.612 ******
2026-01-03 00:59:14.461769 | orchestrator |
2026-01-03 00:59:14.461773 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-01-03 00:59:14.461777 | orchestrator | Saturday 03 January 2026 00:58:27 +0000 (0:00:00.063) 0:00:50.676 ******
2026-01-03 00:59:14.461781 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:59:14.461785 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:59:14.461789 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:59:14.461793 | orchestrator |
2026-01-03 00:59:14.461799 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:59:14.461803 | orchestrator | testbed-node-0 : ok=38  changed=12  unreachable=0 failed=0 skipped=26  rescued=0 ignored=0
2026-01-03 00:59:14.461808 | orchestrator | testbed-node-1 : ok=35  changed=9  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-01-03 00:59:14.461815 | orchestrator | testbed-node-2 : ok=35  changed=9  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-01-03 00:59:14.461819 | orchestrator |
2026-01-03 00:59:14.461823 | orchestrator |
2026-01-03 00:59:14.461826 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:59:14.461830 | orchestrator | Saturday 03 January 2026 00:59:12 +0000 (0:00:44.916) 0:01:35.592 ******
2026-01-03 00:59:14.461834 | orchestrator | ===============================================================================
2026-01-03 00:59:14.461838 | orchestrator | horizon : Restart horizon container ------------------------------------ 44.92s
2026-01-03 00:59:14.461842 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.53s
2026-01-03 00:59:14.461845 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.53s
2026-01-03 00:59:14.461851 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.48s
2026-01-03 00:59:14.461857 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.12s
2026-01-03 00:59:14.461865 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.87s
2026-01-03 00:59:14.461874 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.78s
2026-01-03 00:59:14.461881 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.64s
2026-01-03 00:59:14.461887 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.55s
2026-01-03 00:59:14.461893 | orchestrator | service-check-containers : horizon | Check containers ------------------- 1.50s
2026-01-03 00:59:14.461900 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.18s
2026-01-03 00:59:14.461907 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.11s
2026-01-03 00:59:14.461913 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.83s
2026-01-03 00:59:14.461921 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.82s
2026-01-03 00:59:14.461941 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.72s
2026-01-03 00:59:14.461949 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.70s
2026-01-03 00:59:14.461955 | orchestrator | horizon : Update policy file name --------------------------------------- 0.54s
2026-01-03 00:59:14.461961 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s
2026-01-03 00:59:14.461965 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.50s
2026-01-03 00:59:14.461969 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.48s
2026-01-03 00:59:14.461972 | orchestrator | 2026-01-03 00:59:14 | INFO  | Task 3a5bd2ff-0080-4e6c-b47b-9343e3fb6cd6 is in state STARTED
2026-01-03 00:59:14.461976 | orchestrator | 2026-01-03 00:59:14 | INFO  | Task 28f3b550-d646-4431-9ba5-4c2ae0a31358 is in state STARTED
2026-01-03 00:59:14.461980 | orchestrator | 2026-01-03 00:59:14 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:59:17.518555 | orchestrator | 2026-01-03 00:59:17 | INFO  | Task dd16cd92-4ba7-479b-9b44-63681e0e3f35 is in state STARTED
2026-01-03 00:59:17.519732 | orchestrator | 2026-01-03 00:59:17 | INFO  | Task 8a4ccb14-5583-4040-939e-f58397d39c17 is in state STARTED
2026-01-03 00:59:17.522258 | orchestrator | 2026-01-03 00:59:17 | INFO  | Task 59b8fe13-768a-4cc4-b35e-454dc69ec66c is in state STARTED
2026-01-03 00:59:17.523321 | orchestrator | 2026-01-03 00:59:17 | INFO  | Task 3a5bd2ff-0080-4e6c-b47b-9343e3fb6cd6 is in state STARTED
2026-01-03 00:59:17.524975 | orchestrator | 2026-01-03 00:59:17 | INFO  | Task 28f3b550-d646-4431-9ba5-4c2ae0a31358 is in state STARTED
2026-01-03 00:59:17.525124 | orchestrator | 2026-01-03 00:59:17 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:59:20.580882 | orchestrator | 2026-01-03 00:59:20 | INFO  | Task dd16cd92-4ba7-479b-9b44-63681e0e3f35 is in state STARTED
2026-01-03 00:59:20.588272 | orchestrator | 2026-01-03 00:59:20 | INFO  | Task 8a4ccb14-5583-4040-939e-f58397d39c17 is in state STARTED
2026-01-03 00:59:20.590196 | orchestrator | 2026-01-03 00:59:20 | INFO  | Task 59b8fe13-768a-4cc4-b35e-454dc69ec66c is in state STARTED
2026-01-03 00:59:20.592877 | orchestrator | 2026-01-03 00:59:20 | INFO  | Task 3a5bd2ff-0080-4e6c-b47b-9343e3fb6cd6 is in state STARTED
2026-01-03 00:59:20.595625 | orchestrator | 2026-01-03 00:59:20 | INFO  | Task 28f3b550-d646-4431-9ba5-4c2ae0a31358 is in state STARTED
2026-01-03 00:59:20.596065 | orchestrator | 2026-01-03 00:59:20 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:59:23.648475 | orchestrator | 2026-01-03 00:59:23 | INFO  | Task dd16cd92-4ba7-479b-9b44-63681e0e3f35 is in state STARTED
2026-01-03 00:59:23.650283 | orchestrator | 2026-01-03 00:59:23 | INFO  | Task 8a4ccb14-5583-4040-939e-f58397d39c17 is in state STARTED
2026-01-03 00:59:23.652819 | orchestrator | 2026-01-03 00:59:23 | INFO  | Task 59b8fe13-768a-4cc4-b35e-454dc69ec66c is in state STARTED
2026-01-03 00:59:23.655338 | orchestrator | 2026-01-03 00:59:23 | INFO  | Task 3a5bd2ff-0080-4e6c-b47b-9343e3fb6cd6 is in state STARTED
2026-01-03 00:59:23.656640 | orchestrator | 2026-01-03 00:59:23 | INFO  | Task 28f3b550-d646-4431-9ba5-4c2ae0a31358 is in state STARTED
2026-01-03 00:59:23.656997 | orchestrator | 2026-01-03 00:59:23 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:59:26.709865 | orchestrator | 2026-01-03 00:59:26 | INFO  | Task dd16cd92-4ba7-479b-9b44-63681e0e3f35 is in state STARTED
2026-01-03 00:59:26.711832 | orchestrator | 2026-01-03 00:59:26 | INFO  | Task 8a4ccb14-5583-4040-939e-f58397d39c17 is in state STARTED
2026-01-03 00:59:26.711886 | orchestrator | 2026-01-03 00:59:26 | INFO  | Task 59b8fe13-768a-4cc4-b35e-454dc69ec66c is in state STARTED
2026-01-03 00:59:26.712972 | orchestrator | 2026-01-03 00:59:26 | INFO  | Task 3a5bd2ff-0080-4e6c-b47b-9343e3fb6cd6 is in state STARTED
2026-01-03 00:59:26.713829 | orchestrator | 2026-01-03 00:59:26 | INFO  | Task 28f3b550-d646-4431-9ba5-4c2ae0a31358 is in state STARTED
2026-01-03 00:59:26.713859 | orchestrator | 2026-01-03 00:59:26 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:59:29.747987 | orchestrator | 2026-01-03 00:59:29 | INFO  | Task dd16cd92-4ba7-479b-9b44-63681e0e3f35 is in state STARTED
2026-01-03 00:59:29.749526 | orchestrator | 2026-01-03 00:59:29 | INFO  | Task 8a4ccb14-5583-4040-939e-f58397d39c17 is in state STARTED
2026-01-03 00:59:29.751294 | orchestrator | 2026-01-03 00:59:29 | INFO  | Task 59b8fe13-768a-4cc4-b35e-454dc69ec66c is in state STARTED
2026-01-03 00:59:29.753691 | orchestrator | 2026-01-03 00:59:29 | INFO  | Task 3a5bd2ff-0080-4e6c-b47b-9343e3fb6cd6 is in state STARTED
2026-01-03 00:59:29.755065 | orchestrator | 2026-01-03 00:59:29 | INFO  | Task 28f3b550-d646-4431-9ba5-4c2ae0a31358 is in state STARTED
2026-01-03 00:59:29.755109 | orchestrator | 2026-01-03 00:59:29 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:59:32.808087 | orchestrator | 2026-01-03 00:59:32 | INFO  | Task dd16cd92-4ba7-479b-9b44-63681e0e3f35 is in state STARTED
2026-01-03 00:59:32.811670 | orchestrator | 2026-01-03 00:59:32 | INFO  | Task 96a34b94-97f2-4adc-b154-2745af2ca2b6 is in state STARTED
2026-01-03 00:59:32.814889 | orchestrator | 2026-01-03 00:59:32 | INFO  | Task 8a4ccb14-5583-4040-939e-f58397d39c17 is in state STARTED
2026-01-03 00:59:32.817361 | orchestrator | 2026-01-03 00:59:32 | INFO  | Task 59b8fe13-768a-4cc4-b35e-454dc69ec66c is in state STARTED
2026-01-03 00:59:32.819504 | orchestrator | 2026-01-03
00:59:32 | INFO  | Task 3a5bd2ff-0080-4e6c-b47b-9343e3fb6cd6 is in state SUCCESS 2026-01-03 00:59:32.821288 | orchestrator | 2026-01-03 00:59:32 | INFO  | Task 28f3b550-d646-4431-9ba5-4c2ae0a31358 is in state STARTED 2026-01-03 00:59:32.821561 | orchestrator | 2026-01-03 00:59:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:59:35.877683 | orchestrator | 2026-01-03 00:59:35 | INFO  | Task dd16cd92-4ba7-479b-9b44-63681e0e3f35 is in state STARTED 2026-01-03 00:59:35.878678 | orchestrator | 2026-01-03 00:59:35 | INFO  | Task 96a34b94-97f2-4adc-b154-2745af2ca2b6 is in state STARTED 2026-01-03 00:59:35.880522 | orchestrator | 2026-01-03 00:59:35 | INFO  | Task 8a4ccb14-5583-4040-939e-f58397d39c17 is in state STARTED 2026-01-03 00:59:35.881529 | orchestrator | 2026-01-03 00:59:35 | INFO  | Task 59b8fe13-768a-4cc4-b35e-454dc69ec66c is in state STARTED 2026-01-03 00:59:35.882697 | orchestrator | 2026-01-03 00:59:35 | INFO  | Task 28f3b550-d646-4431-9ba5-4c2ae0a31358 is in state STARTED 2026-01-03 00:59:35.882727 | orchestrator | 2026-01-03 00:59:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:59:38.933851 | orchestrator | 2026-01-03 00:59:38 | INFO  | Task dd16cd92-4ba7-479b-9b44-63681e0e3f35 is in state STARTED 2026-01-03 00:59:38.936576 | orchestrator | 2026-01-03 00:59:38 | INFO  | Task 96a34b94-97f2-4adc-b154-2745af2ca2b6 is in state STARTED 2026-01-03 00:59:38.938418 | orchestrator | 2026-01-03 00:59:38 | INFO  | Task 8a4ccb14-5583-4040-939e-f58397d39c17 is in state STARTED 2026-01-03 00:59:38.940012 | orchestrator | 2026-01-03 00:59:38 | INFO  | Task 59b8fe13-768a-4cc4-b35e-454dc69ec66c is in state STARTED 2026-01-03 00:59:38.941881 | orchestrator | 2026-01-03 00:59:38 | INFO  | Task 28f3b550-d646-4431-9ba5-4c2ae0a31358 is in state STARTED 2026-01-03 00:59:38.941912 | orchestrator | 2026-01-03 00:59:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:59:41.992944 | orchestrator | 2026-01-03 00:59:41 | INFO  | Task 
dd16cd92-4ba7-479b-9b44-63681e0e3f35 is in state STARTED 2026-01-03 00:59:41.995274 | orchestrator | 2026-01-03 00:59:41 | INFO  | Task 96a34b94-97f2-4adc-b154-2745af2ca2b6 is in state STARTED 2026-01-03 00:59:41.997628 | orchestrator | 2026-01-03 00:59:41 | INFO  | Task 8a4ccb14-5583-4040-939e-f58397d39c17 is in state STARTED 2026-01-03 00:59:41.999597 | orchestrator | 2026-01-03 00:59:42 | INFO  | Task 59b8fe13-768a-4cc4-b35e-454dc69ec66c is in state STARTED 2026-01-03 00:59:42.001253 | orchestrator | 2026-01-03 00:59:42 | INFO  | Task 28f3b550-d646-4431-9ba5-4c2ae0a31358 is in state STARTED 2026-01-03 00:59:42.001329 | orchestrator | 2026-01-03 00:59:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:59:45.044138 | orchestrator | 2026-01-03 00:59:45 | INFO  | Task dd16cd92-4ba7-479b-9b44-63681e0e3f35 is in state STARTED 2026-01-03 00:59:45.045773 | orchestrator | 2026-01-03 00:59:45 | INFO  | Task 96a34b94-97f2-4adc-b154-2745af2ca2b6 is in state STARTED 2026-01-03 00:59:45.047223 | orchestrator | 2026-01-03 00:59:45 | INFO  | Task 8a4ccb14-5583-4040-939e-f58397d39c17 is in state STARTED 2026-01-03 00:59:45.048903 | orchestrator | 2026-01-03 00:59:45 | INFO  | Task 59b8fe13-768a-4cc4-b35e-454dc69ec66c is in state STARTED 2026-01-03 00:59:45.049955 | orchestrator | 2026-01-03 00:59:45 | INFO  | Task 28f3b550-d646-4431-9ba5-4c2ae0a31358 is in state STARTED 2026-01-03 00:59:45.050000 | orchestrator | 2026-01-03 00:59:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:59:48.089751 | orchestrator | 2026-01-03 00:59:48 | INFO  | Task dd16cd92-4ba7-479b-9b44-63681e0e3f35 is in state STARTED 2026-01-03 00:59:48.090680 | orchestrator | 2026-01-03 00:59:48 | INFO  | Task 96a34b94-97f2-4adc-b154-2745af2ca2b6 is in state STARTED 2026-01-03 00:59:48.091711 | orchestrator | 2026-01-03 00:59:48 | INFO  | Task 8a4ccb14-5583-4040-939e-f58397d39c17 is in state STARTED 2026-01-03 00:59:48.093025 | orchestrator | 2026-01-03 00:59:48 | INFO  | Task 
59b8fe13-768a-4cc4-b35e-454dc69ec66c is in state STARTED 2026-01-03 00:59:48.093688 | orchestrator | 2026-01-03 00:59:48 | INFO  | Task 28f3b550-d646-4431-9ba5-4c2ae0a31358 is in state STARTED 2026-01-03 00:59:48.094112 | orchestrator | 2026-01-03 00:59:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:59:51.143161 | orchestrator | 2026-01-03 00:59:51.143206 | orchestrator | 2026-01-03 00:59:51.143211 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-01-03 00:59:51.143215 | orchestrator | 2026-01-03 00:59:51.143218 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-01-03 00:59:51.143221 | orchestrator | Saturday 03 January 2026 00:58:58 +0000 (0:00:00.149) 0:00:00.149 ****** 2026-01-03 00:59:51.143225 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-01-03 00:59:51.143228 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-03 00:59:51.143232 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-03 00:59:51.143235 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-01-03 00:59:51.143238 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-03 00:59:51.143241 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-01-03 00:59:51.143244 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-01-03 00:59:51.143248 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-01-03 00:59:51.143251 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.manila.keyring) 2026-01-03 00:59:51.143254 | orchestrator | 2026-01-03 00:59:51.143257 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-01-03 00:59:51.143260 | orchestrator | Saturday 03 January 2026 00:59:03 +0000 (0:00:04.886) 0:00:05.036 ****** 2026-01-03 00:59:51.143263 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-01-03 00:59:51.143266 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-03 00:59:51.143269 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-03 00:59:51.143272 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-01-03 00:59:51.143283 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-03 00:59:51.143287 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-01-03 00:59:51.143290 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-01-03 00:59:51.143293 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-01-03 00:59:51.143296 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-01-03 00:59:51.143299 | orchestrator | 2026-01-03 00:59:51.143302 | orchestrator | TASK [Create share directory] ************************************************** 2026-01-03 00:59:51.143305 | orchestrator | Saturday 03 January 2026 00:59:07 +0000 (0:00:04.166) 0:00:09.202 ****** 2026-01-03 00:59:51.143308 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-03 00:59:51.143320 | orchestrator | 2026-01-03 00:59:51.143324 | 
orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-01-03 00:59:51.143327 | orchestrator | Saturday 03 January 2026 00:59:08 +0000 (0:00:00.927) 0:00:10.129 ****** 2026-01-03 00:59:51.143330 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-01-03 00:59:51.143333 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-03 00:59:51.143336 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-03 00:59:51.143340 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-01-03 00:59:51.143343 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-03 00:59:51.143346 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-01-03 00:59:51.143349 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-01-03 00:59:51.143352 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-01-03 00:59:51.143355 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-01-03 00:59:51.143358 | orchestrator | 2026-01-03 00:59:51.143361 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-01-03 00:59:51.143364 | orchestrator | Saturday 03 January 2026 00:59:21 +0000 (0:00:12.707) 0:00:22.837 ****** 2026-01-03 00:59:51.143368 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-01-03 00:59:51.143371 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-01-03 00:59:51.143374 | orchestrator | ok: [testbed-manager] => 
(item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-01-03 00:59:51.143377 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-01-03 00:59:51.143387 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-01-03 00:59:51.143390 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-01-03 00:59:51.143394 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-01-03 00:59:51.143396 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-01-03 00:59:51.143400 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-01-03 00:59:51.143403 | orchestrator | 2026-01-03 00:59:51.143406 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-01-03 00:59:51.143409 | orchestrator | Saturday 03 January 2026 00:59:24 +0000 (0:00:02.962) 0:00:25.800 ****** 2026-01-03 00:59:51.143412 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-01-03 00:59:51.143415 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-03 00:59:51.143418 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-03 00:59:51.143421 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-01-03 00:59:51.143438 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-03 00:59:51.143441 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-01-03 00:59:51.143444 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-01-03 00:59:51.143447 | orchestrator | 
changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-01-03 00:59:51.143450 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-01-03 00:59:51.143453 | orchestrator | 2026-01-03 00:59:51.143456 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:59:51.143462 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:59:51.143466 | orchestrator | 2026-01-03 00:59:51.143469 | orchestrator | 2026-01-03 00:59:51.143472 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:59:51.143475 | orchestrator | Saturday 03 January 2026 00:59:30 +0000 (0:00:06.649) 0:00:32.450 ****** 2026-01-03 00:59:51.143478 | orchestrator | =============================================================================== 2026-01-03 00:59:51.143483 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.71s 2026-01-03 00:59:51.143487 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.65s 2026-01-03 00:59:51.143490 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.89s 2026-01-03 00:59:51.143493 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.17s 2026-01-03 00:59:51.143496 | orchestrator | Check if target directories exist --------------------------------------- 2.96s 2026-01-03 00:59:51.143499 | orchestrator | Create share directory -------------------------------------------------- 0.93s 2026-01-03 00:59:51.143502 | orchestrator | 2026-01-03 00:59:51.143505 | orchestrator | 2026-01-03 00:59:51.143508 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-03 00:59:51.143511 | orchestrator | 2026-01-03 00:59:51.143514 | orchestrator | TASK [Group hosts based on 
Kolla action] *************************************** 2026-01-03 00:59:51.143517 | orchestrator | Saturday 03 January 2026 00:58:43 +0000 (0:00:00.229) 0:00:00.229 ****** 2026-01-03 00:59:51.143520 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:59:51.143523 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:59:51.143526 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:59:51.143529 | orchestrator | 2026-01-03 00:59:51.143532 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-03 00:59:51.143535 | orchestrator | Saturday 03 January 2026 00:58:44 +0000 (0:00:00.253) 0:00:00.483 ****** 2026-01-03 00:59:51.143539 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-01-03 00:59:51.143542 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-01-03 00:59:51.143545 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-01-03 00:59:51.143548 | orchestrator | 2026-01-03 00:59:51.143552 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-01-03 00:59:51.143555 | orchestrator | 2026-01-03 00:59:51.143558 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-03 00:59:51.143561 | orchestrator | Saturday 03 January 2026 00:58:44 +0000 (0:00:00.342) 0:00:00.826 ****** 2026-01-03 00:59:51.143564 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:59:51.143567 | orchestrator | 2026-01-03 00:59:51.143570 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting services] ************* 2026-01-03 00:59:51.143573 | orchestrator | Saturday 03 January 2026 00:58:44 +0000 (0:00:00.478) 0:00:01.304 ****** 2026-01-03 00:59:51.143576 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (5 retries left). 
2026-01-03 00:59:51.143579 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (4 retries left). 2026-01-03 00:59:51.143582 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (3 retries left). 2026-01-03 00:59:51.143590 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (2 retries left). 2026-01-03 00:59:51.143593 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (1 retries left). 2026-01-03 00:59:51.143614 | orchestrator | failed: [testbed-node-0] (item=barbican (key-manager)) => {"action": "openstack.cloud.catalog_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Barbican Key Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9311"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9311"}], "name": "barbican", "type": "key-manager"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. 
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 136, in _do_create_plugin\n disc = self.get_discovery(\n ^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 703, in get_discovery\n return discover.get_discovery(\n ^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1742, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 585, in __init__\n self._data = get_version_data(\n ^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 114, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1320, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1118, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767401988.1069613-3284-30737165820676/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767401988.1069613-3284-30737165820676/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1767401988.1069613-3284-30737165820676/AnsiballZ_catalog_service.py\", line 47, in 
invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_bzkrw7by/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_bzkrw7by/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_bzkrw7by/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_bzkrw7by/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_bzkrw7by/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 91, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 289, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 403, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1478, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 573, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 296, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 139, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 221, in get_auth_ref\n plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 163, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE: No start of json char found\nSee stdout/stderr for the exact error", "rc": 1}
2026-01-03 00:59:51.143624 | orchestrator |
2026-01-03 00:59:51.143627 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:59:51.143630 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-01-03 00:59:51.143634 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:59:51.143637 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:59:51.143640 | orchestrator |
2026-01-03 00:59:51.143644 | orchestrator |
2026-01-03 00:59:51.143647 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:59:51.143650 | orchestrator | Saturday 03 January 2026 00:59:49 +0000 (0:01:04.170) 0:01:05.475 ******
2026-01-03 00:59:51.143653 | orchestrator | ===============================================================================
2026-01-03 00:59:51.143656 | orchestrator | service-ks-register : barbican | Creating/deleting services ------------ 64.17s
2026-01-03 00:59:51.143659 | orchestrator | barbican : include_tasks ------------------------------------------------ 0.48s
2026-01-03 00:59:51.143662 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.34s
2026-01-03 00:59:51.143665 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.25s
2026-01-03 00:59:51.143668 | orchestrator | 2026-01-03 00:59:51 | INFO  | Task dd16cd92-4ba7-479b-9b44-63681e0e3f35 is in state SUCCESS
2026-01-03 00:59:51.144149 | orchestrator | 2026-01-03 00:59:51 | INFO  | Task 96a34b94-97f2-4adc-b154-2745af2ca2b6 is in state STARTED
2026-01-03 00:59:51.145586 | orchestrator | 2026-01-03 00:59:51 | INFO  | Task 90bb8377-f4a6-4662-936f-d0dfd86c477f is in state STARTED
2026-01-03 00:59:51.146826 | orchestrator | 2026-01-03 00:59:51 | INFO  | Task 8f61be31-47b8-4a6f-929b-67eabd0bd116 is in state STARTED
2026-01-03 00:59:51.147970 | orchestrator | 2026-01-03 00:59:51 | INFO  | Task 8a4ccb14-5583-4040-939e-f58397d39c17 is in state STARTED
2026-01-03 00:59:51.149179 | orchestrator | 2026-01-03 00:59:51 | INFO  | Task 59b8fe13-768a-4cc4-b35e-454dc69ec66c is in state STARTED
2026-01-03 00:59:51.151539 | orchestrator |
2026-01-03 00:59:51.151561 | orchestrator |
2026-01-03 00:59:51.151565 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-03 00:59:51.151569 | orchestrator |
2026-01-03 00:59:51.151573 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-03 00:59:51.151576 | orchestrator | Saturday 03 January 2026 00:58:43 +0000 (0:00:00.186) 0:00:00.186 ******
2026-01-03 00:59:51.151579 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:59:51.151583 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:59:51.151586 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:59:51.151589 | orchestrator |
2026-01-03 00:59:51.151592 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-03 00:59:51.151595 | orchestrator | Saturday 03 January 2026 00:58:43 +0000 (0:00:00.222) 0:00:00.408 ******
2026-01-03 00:59:51.151599 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-01-03 00:59:51.151602 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-01-03 00:59:51.151605 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-01-03 00:59:51.151608 | orchestrator |
2026-01-03 00:59:51.151611 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-01-03 00:59:51.151614 | orchestrator |
2026-01-03 00:59:51.151617 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-01-03 00:59:51.151620 | orchestrator | Saturday 03 January 2026 00:58:44 +0000 (0:00:00.349) 0:00:00.758 ******
2026-01-03 00:59:51.151624 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:59:51.151627 | orchestrator |
2026-01-03 00:59:51.151630 | orchestrator | TASK [service-ks-register : designate | Creating/deleting services] ************
2026-01-03 00:59:51.151633 | orchestrator | Saturday 03 January 2026 00:58:44 +0000 (0:00:00.422) 0:00:01.181 ******
2026-01-03 00:59:51.151636 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (5 retries left).
2026-01-03 00:59:51.151639 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (4 retries left).
2026-01-03 00:59:51.151642 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (3 retries left).
2026-01-03 00:59:51.151645 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (2 retries left).
2026-01-03 00:59:51.151648 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (1 retries left).
2026-01-03 00:59:51.151671 | orchestrator | failed: [testbed-node-0] (item=designate (dns)) => {"action": "openstack.cloud.catalog_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Designate DNS Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9001"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9001"}], "name": "designate", "type": "dns"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000.
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 136, in _do_create_plugin\n disc = self.get_discovery(\n ^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 703, in get_discovery\n return discover.get_discovery(\n ^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1742, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 585, in __init__\n self._data = get_version_data(\n ^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 114, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1320, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1118, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767401987.8186424-3265-15088394398241/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767401987.8186424-3265-15088394398241/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1767401987.8186424-3265-15088394398241/AnsiballZ_catalog_service.py\", line 47, in 
invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_pmovo4cf/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_pmovo4cf/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_pmovo4cf/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_pmovo4cf/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_pmovo4cf/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 91, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 289, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 403, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1478, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 573, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 296, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 139, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 221, in get_auth_ref\n plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 163, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE: No start of json char found\nSee stdout/stderr for the exact error", "rc": 1} 2026-01-03 00:59:51.151686 | orchestrator | 2026-01-03 00:59:51.151691 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:59:51.151694 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-01-03 00:59:51.151699 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:59:51.151703 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:59:51.151706 | orchestrator | 2026-01-03 00:59:51.151709 | orchestrator | 2026-01-03 00:59:51.151712 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:59:51.151715 | orchestrator | Saturday 03 January 2026 00:59:48 +0000 (0:01:04.149) 0:01:05.331 ****** 2026-01-03 00:59:51.151718 | orchestrator | =============================================================================== 2026-01-03 00:59:51.151722 | orchestrator | service-ks-register : designate | Creating/deleting services ----------- 64.15s 2026-01-03 00:59:51.151725 | orchestrator | designate : include_tasks ----------------------------------------------- 0.42s 2026-01-03 00:59:51.151728 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.35s 2026-01-03 00:59:51.151731 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.22s 2026-01-03 00:59:51.151734 | orchestrator | 2026-01-03 00:59:51 | INFO  | Task 28f3b550-d646-4431-9ba5-4c2ae0a31358 is in state SUCCESS 2026-01-03 00:59:51.151737 | orchestrator | 2026-01-03 00:59:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:59:54.288567 | orchestrator | 2026-01-03 00:59:54 | INFO  | Task 
96a34b94-97f2-4adc-b154-2745af2ca2b6 is in state STARTED 2026-01-03 00:59:54.294764 | orchestrator | 2026-01-03 00:59:54 | INFO  | Task 90bb8377-f4a6-4662-936f-d0dfd86c477f is in state STARTED 2026-01-03 00:59:54.301636 | orchestrator | 2026-01-03 00:59:54 | INFO  | Task 8f61be31-47b8-4a6f-929b-67eabd0bd116 is in state STARTED 2026-01-03 00:59:54.304986 | orchestrator | 2026-01-03 00:59:54 | INFO  | Task 8a4ccb14-5583-4040-939e-f58397d39c17 is in state STARTED 2026-01-03 00:59:54.306663 | orchestrator | 2026-01-03 00:59:54 | INFO  | Task 59b8fe13-768a-4cc4-b35e-454dc69ec66c is in state STARTED 2026-01-03 00:59:54.306699 | orchestrator | 2026-01-03 00:59:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:59:57.350292 | orchestrator | 2026-01-03 00:59:57 | INFO  | Task 96a34b94-97f2-4adc-b154-2745af2ca2b6 is in state STARTED 2026-01-03 00:59:57.352824 | orchestrator | 2026-01-03 00:59:57 | INFO  | Task 90bb8377-f4a6-4662-936f-d0dfd86c477f is in state STARTED 2026-01-03 00:59:57.353918 | orchestrator | 2026-01-03 00:59:57 | INFO  | Task 8f61be31-47b8-4a6f-929b-67eabd0bd116 is in state STARTED 2026-01-03 00:59:57.356047 | orchestrator | 2026-01-03 00:59:57 | INFO  | Task 8a4ccb14-5583-4040-939e-f58397d39c17 is in state STARTED 2026-01-03 00:59:57.356971 | orchestrator | 2026-01-03 00:59:57 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 00:59:57.358430 | orchestrator | 2026-01-03 00:59:57 | INFO  | Task 59b8fe13-768a-4cc4-b35e-454dc69ec66c is in state SUCCESS 2026-01-03 00:59:57.358798 | orchestrator | 2026-01-03 00:59:57.358815 | orchestrator | 2026-01-03 00:59:57.358820 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-03 00:59:57.358825 | orchestrator | 2026-01-03 00:59:57.358830 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-03 00:59:57.358835 | orchestrator | Saturday 03 January 2026 00:58:43 +0000 
(0:00:00.231) 0:00:00.231 ****** 2026-01-03 00:59:57.358840 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:59:57.358845 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:59:57.358850 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:59:57.358855 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:59:57.358859 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:59:57.358864 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:59:57.358868 | orchestrator | 2026-01-03 00:59:57.358873 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-03 00:59:57.358878 | orchestrator | Saturday 03 January 2026 00:58:44 +0000 (0:00:00.619) 0:00:00.851 ****** 2026-01-03 00:59:57.358882 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-01-03 00:59:57.358887 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-01-03 00:59:57.358891 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-01-03 00:59:57.358896 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-01-03 00:59:57.358901 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-01-03 00:59:57.358906 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-01-03 00:59:57.358912 | orchestrator | 2026-01-03 00:59:57.358920 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-01-03 00:59:57.358927 | orchestrator | 2026-01-03 00:59:57.358935 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-03 00:59:57.358943 | orchestrator | Saturday 03 January 2026 00:58:44 +0000 (0:00:00.449) 0:00:01.300 ****** 2026-01-03 00:59:57.358951 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:59:57.358960 | orchestrator | 2026-01-03 00:59:57.358969 | 
orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-01-03 00:59:57.358977 | orchestrator | Saturday 03 January 2026 00:58:45 +0000 (0:00:00.833) 0:00:02.133 ****** 2026-01-03 00:59:57.358982 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:59:57.358987 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:59:57.358991 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:59:57.358996 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:59:57.359000 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:59:57.359005 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:59:57.359009 | orchestrator | 2026-01-03 00:59:57.359014 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-01-03 00:59:57.359018 | orchestrator | Saturday 03 January 2026 00:58:46 +0000 (0:00:01.061) 0:00:03.195 ****** 2026-01-03 00:59:57.359023 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:59:57.359027 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:59:57.359039 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:59:57.359050 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:59:57.359061 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:59:57.359068 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:59:57.359075 | orchestrator | 2026-01-03 00:59:57.359097 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-01-03 00:59:57.359104 | orchestrator | Saturday 03 January 2026 00:58:47 +0000 (0:00:01.038) 0:00:04.233 ****** 2026-01-03 00:59:57.359111 | orchestrator | ok: [testbed-node-0] => { 2026-01-03 00:59:57.359119 | orchestrator |  "changed": false, 2026-01-03 00:59:57.359126 | orchestrator |  "msg": "All assertions passed" 2026-01-03 00:59:57.359133 | orchestrator | } 2026-01-03 00:59:57.359141 | orchestrator | ok: [testbed-node-1] => { 2026-01-03 00:59:57.359148 | orchestrator |  "changed": false, 2026-01-03 00:59:57.359155 | orchestrator |  
"msg": "All assertions passed" 2026-01-03 00:59:57.359162 | orchestrator | } 2026-01-03 00:59:57.359170 | orchestrator | ok: [testbed-node-2] => { 2026-01-03 00:59:57.359177 | orchestrator |  "changed": false, 2026-01-03 00:59:57.359186 | orchestrator |  "msg": "All assertions passed" 2026-01-03 00:59:57.359193 | orchestrator | } 2026-01-03 00:59:57.359201 | orchestrator | ok: [testbed-node-3] => { 2026-01-03 00:59:57.359209 | orchestrator |  "changed": false, 2026-01-03 00:59:57.359217 | orchestrator |  "msg": "All assertions passed" 2026-01-03 00:59:57.359223 | orchestrator | } 2026-01-03 00:59:57.359228 | orchestrator | ok: [testbed-node-4] => { 2026-01-03 00:59:57.359233 | orchestrator |  "changed": false, 2026-01-03 00:59:57.359237 | orchestrator |  "msg": "All assertions passed" 2026-01-03 00:59:57.359243 | orchestrator | } 2026-01-03 00:59:57.359248 | orchestrator | ok: [testbed-node-5] => { 2026-01-03 00:59:57.359252 | orchestrator |  "changed": false, 2026-01-03 00:59:57.359258 | orchestrator |  "msg": "All assertions passed" 2026-01-03 00:59:57.359263 | orchestrator | } 2026-01-03 00:59:57.359268 | orchestrator | 2026-01-03 00:59:57.359273 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-01-03 00:59:57.359278 | orchestrator | Saturday 03 January 2026 00:58:48 +0000 (0:00:00.646) 0:00:04.880 ****** 2026-01-03 00:59:57.359296 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:57.359301 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:59:57.359306 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:59:57.359311 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:59:57.359316 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:59:57.359321 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:59:57.359326 | orchestrator | 2026-01-03 00:59:57.359332 | orchestrator | TASK [service-ks-register : neutron | Creating/deleting services] ************** 2026-01-03 
00:59:57.359337 | orchestrator | Saturday 03 January 2026 00:58:49 +0000 (0:00:00.497) 0:00:05.377 ****** 2026-01-03 00:59:57.359342 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (5 retries left). 2026-01-03 00:59:57.359348 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (4 retries left). 2026-01-03 00:59:57.359353 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (3 retries left). 2026-01-03 00:59:57.359358 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (2 retries left). 2026-01-03 00:59:57.359363 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (1 retries left). 2026-01-03 00:59:57.359390 | orchestrator | failed: [testbed-node-0] (item=neutron (network)) => {"action": "openstack.cloud.catalog_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Openstack Networking", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9696"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9696"}], "name": "neutron", "type": "network"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. 
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 136, in _do_create_plugin\n disc = self.get_discovery(\n ^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 703, in get_discovery\n return discover.get_discovery(\n ^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1742, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 585, in __init__\n self._data = get_version_data(\n ^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 114, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1320, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1118, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767401994.428985-3325-249400417188176/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767401994.428985-3325-249400417188176/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1767401994.428985-3325-249400417188176/AnsiballZ_catalog_service.py\", line 47, in 
invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_o09at_65/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_o09at_65/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_o09at_65/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_o09at_65/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_o09at_65/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 91, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 289, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 403, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1478, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 573, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 296, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 139, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 221, in get_auth_ref\n plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 163, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE: No start of json char found\nSee stdout/stderr for the exact error", "rc": 1} 2026-01-03 00:59:57.359406 | orchestrator | 2026-01-03 00:59:57.359412 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:59:57.359417 | orchestrator | testbed-node-0 : ok=6  changed=0 unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-01-03 00:59:57.359423 | orchestrator | testbed-node-1 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:59:57.359429 | orchestrator | testbed-node-2 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:59:57.359436 | orchestrator | testbed-node-3 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:59:57.359458 | orchestrator | testbed-node-4 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:59:57.359467 | orchestrator | testbed-node-5 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:59:57.359473 | orchestrator | 2026-01-03 00:59:57.359480 | orchestrator | 2026-01-03 00:59:57.359487 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:59:57.359496 | orchestrator | Saturday 03 January 2026 00:59:55 +0000 (0:01:06.545) 0:01:11.923 ****** 2026-01-03 00:59:57.359508 | orchestrator | =============================================================================== 2026-01-03 00:59:57.359518 | orchestrator | service-ks-register : neutron | Creating/deleting services ------------- 66.55s 2026-01-03 00:59:57.359527 | orchestrator | neutron : Get container facts ------------------------------------------- 1.06s 2026-01-03 00:59:57.359535 | orchestrator | neutron : Get container volume facts ------------------------------------ 1.04s 2026-01-03 00:59:57.359544 | orchestrator | neutron : 
include_tasks ------------------------------------------------- 0.83s 2026-01-03 00:59:57.359552 | orchestrator | neutron : Check for ML2/OVN presence ------------------------------------ 0.65s 2026-01-03 00:59:57.359561 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.62s 2026-01-03 00:59:57.359575 | orchestrator | neutron : Check for ML2/OVS presence ------------------------------------ 0.50s 2026-01-03 00:59:57.359584 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s 2026-01-03 00:59:57.359593 | orchestrator | 2026-01-03 00:59:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:00:00.413364 | orchestrator | 2026-01-03 01:00:00 | INFO  | Task 96a34b94-97f2-4adc-b154-2745af2ca2b6 is in state STARTED 2026-01-03 01:00:00.417286 | orchestrator | 2026-01-03 01:00:00 | INFO  | Task 90bb8377-f4a6-4662-936f-d0dfd86c477f is in state STARTED 2026-01-03 01:00:00.420583 | orchestrator | 2026-01-03 01:00:00 | INFO  | Task 8f61be31-47b8-4a6f-929b-67eabd0bd116 is in state STARTED 2026-01-03 01:00:00.423056 | orchestrator | 2026-01-03 01:00:00 | INFO  | Task 8a4ccb14-5583-4040-939e-f58397d39c17 is in state STARTED 2026-01-03 01:00:00.426109 | orchestrator | 2026-01-03 01:00:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:00:00.426193 | orchestrator | 2026-01-03 01:00:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:00:03.471767 | orchestrator | 2026-01-03 01:00:03 | INFO  | Task 96a34b94-97f2-4adc-b154-2745af2ca2b6 is in state STARTED 2026-01-03 01:00:03.473942 | orchestrator | 2026-01-03 01:00:03 | INFO  | Task 90bb8377-f4a6-4662-936f-d0dfd86c477f is in state STARTED 2026-01-03 01:00:03.476555 | orchestrator | 2026-01-03 01:00:03 | INFO  | Task 8f61be31-47b8-4a6f-929b-67eabd0bd116 is in state STARTED 2026-01-03 01:00:03.479157 | orchestrator | 2026-01-03 01:00:03 | INFO  | Task 8a4ccb14-5583-4040-939e-f58397d39c17 is in 
state STARTED 2026-01-03 01:00:03.480897 | orchestrator | 2026-01-03 01:00:03 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:00:03.481119 | orchestrator | 2026-01-03 01:00:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:00:06.530669 | orchestrator | 2026-01-03 01:00:06 | INFO  | Task 96a34b94-97f2-4adc-b154-2745af2ca2b6 is in state STARTED 2026-01-03 01:00:06.532436 | orchestrator | 2026-01-03 01:00:06 | INFO  | Task 90bb8377-f4a6-4662-936f-d0dfd86c477f is in state STARTED 2026-01-03 01:00:06.534256 | orchestrator | 2026-01-03 01:00:06 | INFO  | Task 8f61be31-47b8-4a6f-929b-67eabd0bd116 is in state STARTED 2026-01-03 01:00:06.536207 | orchestrator | 2026-01-03 01:00:06 | INFO  | Task 8a4ccb14-5583-4040-939e-f58397d39c17 is in state STARTED 2026-01-03 01:00:06.538349 | orchestrator | 2026-01-03 01:00:06 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:00:06.538684 | orchestrator | 2026-01-03 01:00:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:00:09.582660 | orchestrator | 2026-01-03 01:00:09 | INFO  | Task 96a34b94-97f2-4adc-b154-2745af2ca2b6 is in state STARTED 2026-01-03 01:00:09.585437 | orchestrator | 2026-01-03 01:00:09 | INFO  | Task 90bb8377-f4a6-4662-936f-d0dfd86c477f is in state STARTED 2026-01-03 01:00:09.588060 | orchestrator | 2026-01-03 01:00:09 | INFO  | Task 8f61be31-47b8-4a6f-929b-67eabd0bd116 is in state STARTED 2026-01-03 01:00:09.590809 | orchestrator | 2026-01-03 01:00:09 | INFO  | Task 8a4ccb14-5583-4040-939e-f58397d39c17 is in state STARTED 2026-01-03 01:00:09.592224 | orchestrator | 2026-01-03 01:00:09 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:00:09.592270 | orchestrator | 2026-01-03 01:00:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:00:12.635247 | orchestrator | 2026-01-03 01:00:12 | INFO  | Task 96a34b94-97f2-4adc-b154-2745af2ca2b6 is in state STARTED 2026-01-03 
01:00:12.637899 | orchestrator | 2026-01-03 01:00:12 | INFO  | Task 90bb8377-f4a6-4662-936f-d0dfd86c477f is in state STARTED 2026-01-03 01:00:12.639898 | orchestrator | 2026-01-03 01:00:12 | INFO  | Task 8f61be31-47b8-4a6f-929b-67eabd0bd116 is in state STARTED 2026-01-03 01:00:12.641881 | orchestrator | 2026-01-03 01:00:12 | INFO  | Task 8a4ccb14-5583-4040-939e-f58397d39c17 is in state STARTED 2026-01-03 01:00:12.643747 | orchestrator | 2026-01-03 01:00:12 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:00:12.643784 | orchestrator | 2026-01-03 01:00:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:00:15.695151 | orchestrator | 2026-01-03 01:00:15 | INFO  | Task 96a34b94-97f2-4adc-b154-2745af2ca2b6 is in state STARTED 2026-01-03 01:00:15.698055 | orchestrator | 2026-01-03 01:00:15 | INFO  | Task 90bb8377-f4a6-4662-936f-d0dfd86c477f is in state STARTED 2026-01-03 01:00:15.700038 | orchestrator | 2026-01-03 01:00:15 | INFO  | Task 8f61be31-47b8-4a6f-929b-67eabd0bd116 is in state STARTED 2026-01-03 01:00:15.702114 | orchestrator | 2026-01-03 01:00:15 | INFO  | Task 8a4ccb14-5583-4040-939e-f58397d39c17 is in state STARTED 2026-01-03 01:00:15.703517 | orchestrator | 2026-01-03 01:00:15 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:00:15.703544 | orchestrator | 2026-01-03 01:00:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:00:18.743046 | orchestrator | 2026-01-03 01:00:18 | INFO  | Task 96a34b94-97f2-4adc-b154-2745af2ca2b6 is in state STARTED 2026-01-03 01:00:18.744868 | orchestrator | 2026-01-03 01:00:18 | INFO  | Task 90bb8377-f4a6-4662-936f-d0dfd86c477f is in state STARTED 2026-01-03 01:00:18.747249 | orchestrator | 2026-01-03 01:00:18 | INFO  | Task 8f61be31-47b8-4a6f-929b-67eabd0bd116 is in state STARTED 2026-01-03 01:00:18.749207 | orchestrator | 2026-01-03 01:00:18 | INFO  | Task 8a4ccb14-5583-4040-939e-f58397d39c17 is in state STARTED 2026-01-03 
01:00:18.751402 | orchestrator | 2026-01-03 01:00:18 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 01:00:18.751849 | orchestrator | 2026-01-03 01:00:18 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:00:21.804726 | orchestrator | 2026-01-03 01:00:21 | INFO  | Task 96a34b94-97f2-4adc-b154-2745af2ca2b6 is in state STARTED
2026-01-03 01:00:21.805964 | orchestrator | 2026-01-03 01:00:21 | INFO  | Task 90bb8377-f4a6-4662-936f-d0dfd86c477f is in state STARTED
2026-01-03 01:00:21.807761 | orchestrator | 2026-01-03 01:00:21 | INFO  | Task 8f61be31-47b8-4a6f-929b-67eabd0bd116 is in state STARTED
2026-01-03 01:00:21.809098 | orchestrator | 2026-01-03 01:00:21 | INFO  | Task 8a4ccb14-5583-4040-939e-f58397d39c17 is in state SUCCESS
2026-01-03 01:00:21.810757 | orchestrator | 2026-01-03 01:00:21 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 01:00:21.810795 | orchestrator | 2026-01-03 01:00:21 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:00:24.863298 | orchestrator | 2026-01-03 01:00:24 | INFO  | Task 96a34b94-97f2-4adc-b154-2745af2ca2b6 is in state STARTED
2026-01-03 01:00:24.867322 | orchestrator | 2026-01-03 01:00:24 | INFO  | Task 90bb8377-f4a6-4662-936f-d0dfd86c477f is in state STARTED
2026-01-03 01:00:24.870042 | orchestrator | 2026-01-03 01:00:24 | INFO  | Task 8f61be31-47b8-4a6f-929b-67eabd0bd116 is in state STARTED
2026-01-03 01:00:24.872939 | orchestrator | 2026-01-03 01:00:24 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 01:00:24.872983 | orchestrator | 2026-01-03 01:00:24 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:00:27.920752 | orchestrator | 2026-01-03 01:00:27 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 01:00:27.922298 | orchestrator | 2026-01-03 01:00:27 | INFO  | Task 96a34b94-97f2-4adc-b154-2745af2ca2b6 is in state SUCCESS
2026-01-03 01:00:27.922669 | orchestrator |
2026-01-03 01:00:27.922691 | orchestrator |
2026-01-03 01:00:27.922697 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-01-03 01:00:27.922718 | orchestrator |
2026-01-03 01:00:27.922722 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-01-03 01:00:27.922727 | orchestrator | Saturday 03 January 2026 00:59:16 +0000 (0:00:00.087) 0:00:00.087 ******
2026-01-03 01:00:27.922730 | orchestrator | changed: [localhost]
2026-01-03 01:00:27.922735 | orchestrator |
2026-01-03 01:00:27.922740 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-01-03 01:00:27.922744 | orchestrator | Saturday 03 January 2026 00:59:17 +0000 (0:00:00.839) 0:00:00.926 ******
2026-01-03 01:00:27.922747 | orchestrator | changed: [localhost]
2026-01-03 01:00:27.922751 | orchestrator |
2026-01-03 01:00:27.922755 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-01-03 01:00:27.922759 | orchestrator | Saturday 03 January 2026 00:59:52 +0000 (0:00:34.786) 0:00:35.713 ******
2026-01-03 01:00:27.922778 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left).
2026-01-03 01:00:27.922785 | orchestrator | changed: [localhost]
2026-01-03 01:00:27.922791 | orchestrator |
2026-01-03 01:00:27.922797 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-03 01:00:27.922802 | orchestrator |
2026-01-03 01:00:27.922808 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-03 01:00:27.922814 | orchestrator | Saturday 03 January 2026 01:00:17 +0000 (0:00:25.553) 0:01:01.267 ******
2026-01-03 01:00:27.922820 | orchestrator | ok: [testbed-node-0]
2026-01-03 01:00:27.922826 | orchestrator | ok: [testbed-node-1]
2026-01-03 01:00:27.922831 | orchestrator | ok: [testbed-node-2]
2026-01-03 01:00:27.922837 | orchestrator |
2026-01-03 01:00:27.922843 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-03 01:00:27.922849 | orchestrator | Saturday 03 January 2026 01:00:18 +0000 (0:00:00.310) 0:01:01.577 ******
2026-01-03 01:00:27.922855 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-01-03 01:00:27.922861 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-01-03 01:00:27.922865 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-01-03 01:00:27.922869 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-01-03 01:00:27.922873 | orchestrator |
2026-01-03 01:00:27.922876 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-01-03 01:00:27.922880 | orchestrator | skipping: no hosts matched
2026-01-03 01:00:27.922885 | orchestrator |
2026-01-03 01:00:27.922889 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 01:00:27.922893 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 01:00:27.922900 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 01:00:27.922906 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 01:00:27.922909 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 01:00:27.922913 | orchestrator |
2026-01-03 01:00:27.922917 | orchestrator |
2026-01-03 01:00:27.922920 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 01:00:27.922924 | orchestrator | Saturday 03 January 2026 01:00:18 +0000 (0:00:00.556) 0:01:02.134 ******
2026-01-03 01:00:27.922928 | orchestrator | ===============================================================================
2026-01-03 01:00:27.922932 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 34.79s
2026-01-03 01:00:27.922935 | orchestrator | Download ironic-agent kernel ------------------------------------------- 25.55s
2026-01-03 01:00:27.922939 | orchestrator | Ensure the destination directory exists --------------------------------- 0.84s
2026-01-03 01:00:27.922943 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.56s
2026-01-03 01:00:27.922951 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2026-01-03 01:00:27.922956 | orchestrator |
2026-01-03 01:00:27.922963 | orchestrator |
2026-01-03 01:00:27.922969 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-01-03 01:00:27.922974 | orchestrator |
2026-01-03 01:00:27.922980 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-01-03 01:00:27.922986 | orchestrator | Saturday 03 January 2026 00:59:35 +0000 (0:00:00.223) 0:00:00.223 ******
2026-01-03 01:00:27.922992 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-01-03 01:00:27.923000 | orchestrator |
2026-01-03 01:00:27.923009 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-01-03 01:00:27.923015 | orchestrator | Saturday 03 January 2026 00:59:35 +0000 (0:00:00.250) 0:00:00.473 ******
2026-01-03 01:00:27.923020 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-01-03 01:00:27.923026 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-01-03 01:00:27.923032 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-01-03 01:00:27.923038 | orchestrator |
2026-01-03 01:00:27.923045 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-01-03 01:00:27.923051 | orchestrator | Saturday 03 January 2026 00:59:36 +0000 (0:00:01.245) 0:00:01.719 ******
2026-01-03 01:00:27.923057 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-01-03 01:00:27.923063 | orchestrator |
2026-01-03 01:00:27.923069 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-01-03 01:00:27.923086 | orchestrator | Saturday 03 January 2026 00:59:38 +0000 (0:00:01.441) 0:00:03.161 ******
2026-01-03 01:00:27.923090 | orchestrator | changed: [testbed-manager]
2026-01-03 01:00:27.923094 | orchestrator |
2026-01-03 01:00:27.923098 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-01-03 01:00:27.923102 | orchestrator | Saturday 03 January 2026 00:59:39 +0000 (0:00:00.925) 0:00:04.086 ******
2026-01-03 01:00:27.923106 | orchestrator | changed: [testbed-manager]
2026-01-03 01:00:27.923110 | orchestrator |
2026-01-03 01:00:27.923113 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-01-03 01:00:27.923117 | orchestrator | Saturday 03 January 2026 00:59:40 +0000 (0:00:00.892) 0:00:04.979 ******
2026-01-03 01:00:27.923121 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-01-03 01:00:27.923125 | orchestrator | ok: [testbed-manager]
2026-01-03 01:00:27.923128 | orchestrator |
2026-01-03 01:00:27.923132 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-01-03 01:00:27.923140 | orchestrator | Saturday 03 January 2026 01:00:16 +0000 (0:00:36.394) 0:00:41.374 ******
2026-01-03 01:00:27.923144 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-01-03 01:00:27.923149 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-01-03 01:00:27.923155 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-01-03 01:00:27.923161 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-01-03 01:00:27.923166 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-01-03 01:00:27.923172 | orchestrator |
2026-01-03 01:00:27.923177 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-01-03 01:00:27.923183 | orchestrator | Saturday 03 January 2026 01:00:20 +0000 (0:00:03.989) 0:00:45.363 ******
2026-01-03 01:00:27.923188 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-01-03 01:00:27.923194 | orchestrator |
2026-01-03 01:00:27.923200 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-01-03 01:00:27.923205 | orchestrator | Saturday 03 January 2026 01:00:21 +0000 (0:00:00.141) 0:00:45.818 ******
2026-01-03 01:00:27.923210 | orchestrator | skipping: [testbed-manager]
2026-01-03 01:00:27.923222 | orchestrator |
2026-01-03 01:00:27.923228 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-01-03 01:00:27.923234 | orchestrator | Saturday 03 January 2026 01:00:21 +0000 (0:00:00.450) 0:00:45.960 ******
2026-01-03 01:00:27.923240 | orchestrator | skipping: [testbed-manager]
2026-01-03 01:00:27.923245 | orchestrator |
2026-01-03 01:00:27.923250 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-01-03 01:00:27.923256 | orchestrator | Saturday 03 January 2026 01:00:21 +0000 (0:00:01.328) 0:00:46.411 ******
2026-01-03 01:00:27.923262 | orchestrator | changed: [testbed-manager]
2026-01-03 01:00:27.923269 | orchestrator |
2026-01-03 01:00:27.923275 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-01-03 01:00:27.923280 | orchestrator | Saturday 03 January 2026 01:00:22 +0000 (0:00:00.744) 0:00:47.739 ******
2026-01-03 01:00:27.923286 | orchestrator | changed: [testbed-manager]
2026-01-03 01:00:27.923292 | orchestrator |
2026-01-03 01:00:27.923298 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-01-03 01:00:27.923303 | orchestrator | Saturday 03 January 2026 01:00:23 +0000 (0:00:00.599) 0:00:48.484 ******
2026-01-03 01:00:27.923309 | orchestrator | changed: [testbed-manager]
2026-01-03 01:00:27.923316 | orchestrator |
2026-01-03 01:00:27.923322 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-01-03 01:00:27.923327 | orchestrator | Saturday 03 January 2026 01:00:24 +0000 (0:00:00.599) 0:00:49.084 ******
2026-01-03 01:00:27.923333 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-01-03 01:00:27.923340 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-01-03 01:00:27.923346 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-01-03 01:00:27.923366 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-01-03 01:00:27.923374 | orchestrator |
2026-01-03 01:00:27.923380 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 01:00:27.923387 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-03 01:00:27.923394 | orchestrator |
2026-01-03 01:00:27.923399 | orchestrator |
2026-01-03 01:00:27.923405 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 01:00:27.923411 | orchestrator | Saturday 03 January 2026 01:00:25 +0000 (0:00:01.404) 0:00:50.488 ******
2026-01-03 01:00:27.923417 | orchestrator | ===============================================================================
2026-01-03 01:00:27.923422 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 36.39s
2026-01-03 01:00:27.923428 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.99s
2026-01-03 01:00:27.923434 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.44s
2026-01-03 01:00:27.923441 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.40s
2026-01-03 01:00:27.923447 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.33s
2026-01-03 01:00:27.923453 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.25s
2026-01-03 01:00:27.923459 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.93s
2026-01-03 01:00:27.923465 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.89s
2026-01-03 01:00:27.923472 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.74s
2026-01-03 01:00:27.923478 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.60s
2026-01-03 01:00:27.923484 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.46s
2026-01-03 01:00:27.923491 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.45s
2026-01-03 01:00:27.923505 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.25s
2026-01-03 01:00:27.923534 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s
2026-01-03 01:00:27.924735 | orchestrator | 2026-01-03 01:00:27 | INFO  | Task 90bb8377-f4a6-4662-936f-d0dfd86c477f is in state STARTED
2026-01-03 01:00:27.926265 | orchestrator | 2026-01-03 01:00:27 | INFO  | Task 8f61be31-47b8-4a6f-929b-67eabd0bd116 is in state STARTED
2026-01-03 01:00:27.928077 | orchestrator | 2026-01-03 01:00:27 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 01:00:27.929553 | orchestrator | 2026-01-03 01:00:27 | INFO  | Task 435201c2-df0a-4d13-84e5-6e0cdc7b047e is in state STARTED
2026-01-03 01:00:27.929638 | orchestrator | 2026-01-03 01:00:27 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:00:30.980880 | orchestrator | 2026-01-03 01:00:30 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 01:00:30.982999 | orchestrator | 2026-01-03 01:00:30 | INFO  | Task 90bb8377-f4a6-4662-936f-d0dfd86c477f is in state STARTED
2026-01-03 01:00:30.984233 | orchestrator | 2026-01-03 01:00:30 | INFO  | Task 8f61be31-47b8-4a6f-929b-67eabd0bd116 is in state STARTED
2026-01-03 01:00:30.985583 | orchestrator | 2026-01-03 01:00:30 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 01:00:30.987453 | orchestrator | 2026-01-03 01:00:30 | INFO  | Task 435201c2-df0a-4d13-84e5-6e0cdc7b047e is in state STARTED
2026-01-03 01:00:30.988020 | orchestrator | 2026-01-03 01:00:30 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:00:34.043230 | orchestrator | 2026-01-03 01:00:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 01:00:34.045006 | orchestrator | 2026-01-03 01:00:34 | INFO  | Task 90bb8377-f4a6-4662-936f-d0dfd86c477f is in state STARTED
2026-01-03 01:00:34.047053 | orchestrator | 2026-01-03 01:00:34 | INFO  | Task 8f61be31-47b8-4a6f-929b-67eabd0bd116 is in state STARTED
2026-01-03 01:00:34.049260 | orchestrator | 2026-01-03 01:00:34 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 01:00:34.051099 | orchestrator | 2026-01-03 01:00:34 | INFO  | Task 435201c2-df0a-4d13-84e5-6e0cdc7b047e is in state STARTED
2026-01-03 01:00:34.051192 | orchestrator | 2026-01-03 01:00:34 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:00:37.095724 | orchestrator | 2026-01-03 01:00:37 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 01:00:37.097213 | orchestrator | 2026-01-03 01:00:37 | INFO  | Task 90bb8377-f4a6-4662-936f-d0dfd86c477f is in state STARTED
2026-01-03 01:00:37.098258 | orchestrator | 2026-01-03 01:00:37 | INFO  | Task 8f61be31-47b8-4a6f-929b-67eabd0bd116 is in state STARTED
2026-01-03 01:00:37.099751 | orchestrator | 2026-01-03 01:00:37 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 01:00:37.101298 | orchestrator | 2026-01-03 01:00:37 | INFO  | Task 435201c2-df0a-4d13-84e5-6e0cdc7b047e is in state STARTED
2026-01-03 01:00:37.101331 | orchestrator | 2026-01-03 01:00:37 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:00:40.149290 | orchestrator | 2026-01-03 01:00:40 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 01:00:40.152200 | orchestrator | 2026-01-03 01:00:40 | INFO  | Task 90bb8377-f4a6-4662-936f-d0dfd86c477f is in state STARTED
2026-01-03 01:00:40.153130 | orchestrator | 2026-01-03 01:00:40 | INFO  | Task 8f61be31-47b8-4a6f-929b-67eabd0bd116 is in state STARTED
2026-01-03 01:00:40.154280 | orchestrator | 2026-01-03 01:00:40 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 01:00:40.158901 | orchestrator | 2026-01-03 01:00:40 | INFO  | Task 435201c2-df0a-4d13-84e5-6e0cdc7b047e is in state STARTED
2026-01-03 01:00:40.159119 | orchestrator | 2026-01-03 01:00:40 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:00:43.201148 | orchestrator | 2026-01-03 01:00:43 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 01:00:43.202541 | orchestrator | 2026-01-03 01:00:43 | INFO  | Task 90bb8377-f4a6-4662-936f-d0dfd86c477f is in state STARTED
2026-01-03 01:00:43.203781 | orchestrator | 2026-01-03 01:00:43 | INFO  | Task 8f61be31-47b8-4a6f-929b-67eabd0bd116 is in state STARTED
2026-01-03 01:00:43.205114 | orchestrator | 2026-01-03 01:00:43 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 01:00:43.206784 | orchestrator | 2026-01-03 01:00:43 | INFO  | Task 435201c2-df0a-4d13-84e5-6e0cdc7b047e is in state STARTED
2026-01-03 01:00:43.206832 | orchestrator | 2026-01-03 01:00:43 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:00:46.253513 | orchestrator | 2026-01-03 01:00:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 01:00:46.255566 | orchestrator | 2026-01-03 01:00:46 | INFO  | Task 90bb8377-f4a6-4662-936f-d0dfd86c477f is in state STARTED
2026-01-03 01:00:46.257774 | orchestrator | 2026-01-03 01:00:46 | INFO  | Task 8f61be31-47b8-4a6f-929b-67eabd0bd116 is in state STARTED
2026-01-03 01:00:46.261588 | orchestrator | 2026-01-03 01:00:46 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 01:00:46.264401 | orchestrator | 2026-01-03 01:00:46 | INFO  | Task 435201c2-df0a-4d13-84e5-6e0cdc7b047e is in state STARTED
2026-01-03 01:00:46.264452 | orchestrator | 2026-01-03 01:00:46 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:00:49.308791 | orchestrator | 2026-01-03 01:00:49 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 01:00:49.311002 | orchestrator | 2026-01-03 01:00:49 | INFO  | Task 90bb8377-f4a6-4662-936f-d0dfd86c477f is in state STARTED
2026-01-03 01:00:49.312822 | orchestrator | 2026-01-03 01:00:49 | INFO  | Task 8f61be31-47b8-4a6f-929b-67eabd0bd116 is in state STARTED
2026-01-03 01:00:49.315269 | orchestrator | 2026-01-03 01:00:49 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 01:00:49.317770 | orchestrator | 2026-01-03 01:00:49 | INFO  | Task 435201c2-df0a-4d13-84e5-6e0cdc7b047e is in state STARTED
2026-01-03 01:00:49.317815 | orchestrator | 2026-01-03 01:00:49 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:00:52.364308 | orchestrator | 2026-01-03 01:00:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 01:00:52.365445 | orchestrator | 2026-01-03 01:00:52 | INFO  | Task 90bb8377-f4a6-4662-936f-d0dfd86c477f is in state STARTED
2026-01-03 01:00:52.366972 | orchestrator | 2026-01-03 01:00:52 | INFO  | Task 8f61be31-47b8-4a6f-929b-67eabd0bd116 is in state STARTED
2026-01-03 01:00:52.369740 | orchestrator | 2026-01-03 01:00:52 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 01:00:52.370345 | orchestrator | 2026-01-03 01:00:52 | INFO  | Task 435201c2-df0a-4d13-84e5-6e0cdc7b047e is in state STARTED
2026-01-03 01:00:52.370397 | orchestrator | 2026-01-03 01:00:52 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:00:55.422319 | orchestrator | 2026-01-03 01:00:55 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 01:00:55.426439 | orchestrator | 2026-01-03 01:00:55 | INFO  | Task 90bb8377-f4a6-4662-936f-d0dfd86c477f is in state STARTED
2026-01-03 01:00:55.427595 | orchestrator | 2026-01-03 01:00:55 | INFO  | Task 8f61be31-47b8-4a6f-929b-67eabd0bd116 is in state STARTED
2026-01-03 01:00:55.429081 | orchestrator | 2026-01-03 01:00:55 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 01:00:55.430622 | orchestrator | 2026-01-03 01:00:55 | INFO  | Task 435201c2-df0a-4d13-84e5-6e0cdc7b047e is in state STARTED
2026-01-03 01:00:55.430677 | orchestrator | 2026-01-03 01:00:55 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:00:58.478885 | orchestrator | 2026-01-03 01:00:58 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 01:00:58.480731 | orchestrator | 2026-01-03 01:00:58 | INFO  | Task 90bb8377-f4a6-4662-936f-d0dfd86c477f is in state STARTED
2026-01-03 01:00:58.482543 | orchestrator | 2026-01-03 01:00:58 | INFO  | Task 8f61be31-47b8-4a6f-929b-67eabd0bd116 is in state STARTED
2026-01-03 01:00:58.484356 | orchestrator | 2026-01-03 01:00:58 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 01:00:58.486462 | orchestrator | 2026-01-03 01:00:58 | INFO  | Task 435201c2-df0a-4d13-84e5-6e0cdc7b047e is in state STARTED
2026-01-03 01:00:58.486506 | orchestrator | 2026-01-03 01:00:58 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:01:01.531436 | orchestrator | 2026-01-03 01:01:01 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 01:01:01.533003 | orchestrator |
2026-01-03 01:01:01.533057 | orchestrator |
2026-01-03 01:01:01.533066 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-03 01:01:01.533074 | orchestrator |
2026-01-03 01:01:01.533080 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-03 01:01:01.533086 | orchestrator | Saturday 03 January 2026 00:59:54 +0000 (0:00:00.253) 0:00:00.253 ******
2026-01-03 01:01:01.533093 | orchestrator | ok: [testbed-node-0]
2026-01-03 01:01:01.533100 | orchestrator | ok: [testbed-node-1]
2026-01-03 01:01:01.533106 | orchestrator | ok: [testbed-node-2]
2026-01-03 01:01:01.533113 | orchestrator |
2026-01-03 01:01:01.533119 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-03 01:01:01.533125 | orchestrator | Saturday 03 January 2026 00:59:54 +0000 (0:00:00.315) 0:00:00.568 ******
2026-01-03 01:01:01.533133 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-01-03 01:01:01.533140 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-01-03 01:01:01.533146 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-01-03 01:01:01.533152 | orchestrator |
2026-01-03 01:01:01.533158 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-01-03 01:01:01.533164 | orchestrator |
2026-01-03 01:01:01.533170 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-01-03 01:01:01.533177 | orchestrator | Saturday 03 January 2026 00:59:54 +0000 (0:00:00.444) 0:00:01.013 ******
2026-01-03 01:01:01.533184 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 01:01:01.533192 | orchestrator |
2026-01-03 01:01:01.533213 | orchestrator | TASK [service-ks-register : placement | Creating/deleting services] ************
2026-01-03 01:01:01.533217 | orchestrator | Saturday 03 January 2026 00:59:55 +0000 (0:00:00.588) 0:00:01.602 ******
2026-01-03 01:01:01.533221 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (5 retries left).
2026-01-03 01:01:01.533225 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (4 retries left).
2026-01-03 01:01:01.533229 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (3 retries left).
2026-01-03 01:01:01.533233 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (2 retries left).
2026-01-03 01:01:01.533237 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (1 retries left).
2026-01-03 01:01:01.533265 | orchestrator | failed: [testbed-node-0] (item=placement (placement)) => {"action": "openstack.cloud.catalog_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Placement Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:8780"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:8780"}], "name": "placement", "type": "placement"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 136, in _do_create_plugin\n disc = self.get_discovery(\n ^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 703, in get_discovery\n return discover.get_discovery(\n ^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1742, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 585, in __init__\n self._data = get_version_data(\n ^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 114, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1320, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1118, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767402059.1593397-3720-105104046602119/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767402059.1593397-3720-105104046602119/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1767402059.1593397-3720-105104046602119/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_tjo1mi1p/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_tjo1mi1p/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_tjo1mi1p/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_tjo1mi1p/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_tjo1mi1p/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 91, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 289, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 403, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1478, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 573, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 296, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 139, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 221, in get_auth_ref\n plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 163, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE: No start of json char found\nSee stdout/stderr for the exact error", "rc": 1}
2026-01-03 01:01:01.533295 | orchestrator |
2026-01-03 01:01:01.533300 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 01:01:01.533304 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-01-03 01:01:01.533310 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 01:01:01.533315 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 01:01:01.533319 | orchestrator |
2026-01-03 01:01:01.533323 | orchestrator |
2026-01-03 01:01:01.533327 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 01:01:01.533331 | orchestrator | Saturday 03 January 2026 01:01:00 +0000 (0:01:05.551) 0:01:07.153 ******
2026-01-03 01:01:01.533334 | orchestrator | ===============================================================================
2026-01-03 01:01:01.533338 | orchestrator | service-ks-register : placement | Creating/deleting services ----------- 65.55s
2026-01-03 01:01:01.533342 | orchestrator | placement : include_tasks ----------------------------------------------- 0.59s
2026-01-03 01:01:01.533346 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s
2026-01-03 01:01:01.533349 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s
2026-01-03 01:01:01.533356 | orchestrator | 2026-01-03 01:01:01 | INFO  | Task 90bb8377-f4a6-4662-936f-d0dfd86c477f is in state SUCCESS
2026-01-03 01:01:01.535170 | orchestrator | 2026-01-03 01:01:01 | INFO  | Task 8f61be31-47b8-4a6f-929b-67eabd0bd116 is in state STARTED
2026-01-03 01:01:01.536526 | orchestrator | 2026-01-03 01:01:01 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 01:01:01.538761 | orchestrator | 2026-01-03 01:01:01 | INFO  | Task 435201c2-df0a-4d13-84e5-6e0cdc7b047e is in state STARTED
2026-01-03 01:01:01.538796 | orchestrator | 2026-01-03 01:01:01 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:01:04.589583 | orchestrator | 2026-01-03 01:01:04 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 01:01:04.591894 | orchestrator | 2026-01-03 01:01:04 | INFO  | Task 8f61be31-47b8-4a6f-929b-67eabd0bd116 is in state SUCCESS
2026-01-03 01:01:04.592110 | orchestrator |
2026-01-03 01:01:04.592123 | orchestrator |
2026-01-03 01:01:04.592128 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-03 01:01:04.592134 | orchestrator |
2026-01-03 01:01:04.592138 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-03 01:01:04.592143 | orchestrator | Saturday 03 January 2026 00:59:54 +0000 (0:00:00.261) 0:00:00.261 ******
2026-01-03 01:01:04.592147 | orchestrator | ok: [testbed-node-0]
2026-01-03 01:01:04.592153 | orchestrator | ok: [testbed-node-1]
2026-01-03 01:01:04.592157 | orchestrator | ok: [testbed-node-2]
2026-01-03 01:01:04.592161 | orchestrator |
2026-01-03 01:01:04.592166 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-03 01:01:04.592170 | orchestrator | Saturday 03 January 2026 00:59:55 +0000 (0:00:00.318) 0:00:00.579 ******
2026-01-03 01:01:04.592174 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-01-03 01:01:04.592179 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-01-03 01:01:04.592184 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-01-03 01:01:04.592188 | orchestrator |
2026-01-03 01:01:04.592193 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-01-03 01:01:04.592197 | orchestrator |
2026-01-03 01:01:04.592201 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-01-03 01:01:04.592205 | orchestrator | Saturday 03 January 2026 00:59:55 +0000 (0:00:00.612) 0:00:01.192 ******
2026-01-03 01:01:04.592209 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 01:01:04.592214 | orchestrator |
2026-01-03 01:01:04.592218 | orchestrator | TASK [service-ks-register : magnum | Creating/deleting services] ***************
2026-01-03 01:01:04.592222 | orchestrator | Saturday 03 January 2026 00:59:56 +0000 (0:00:00.593) 0:00:01.785 ******
2026-01-03 01:01:04.592226 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (5 retries left).
2026-01-03 01:01:04.592230 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (4 retries left).
2026-01-03 01:01:04.592234 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (3 retries left).
2026-01-03 01:01:04.592238 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (2 retries left).
2026-01-03 01:01:04.592242 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (1 retries left).
2026-01-03 01:01:04.592300 | orchestrator | failed: [testbed-node-0] (item=magnum (container-infra)) => {"action": "openstack.cloud.catalog_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Container Infrastructure Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9511/v1"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9511/v1"}], "name": "magnum", "type": "container-infra"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 136, in _do_create_plugin\n disc = self.get_discovery(\n ^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 703, in get_discovery\n return discover.get_discovery(\n ^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1742, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 585, in __init__\n self._data = get_version_data(\n ^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 114, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1320, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1118, in request\n raise exceptions.from_response(resp, method, 
url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767402060.1044223-3747-111106249525385/AnsiballZ_catalog_service.py\", line 107, in <module>\n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767402060.1044223-3747-111106249525385/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1767402060.1044223-3747-111106249525385/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"<frozen runpy>\", line 226, in run_module\n File \"<frozen runpy>\", line 98, in _run_module_code\n File \"<frozen runpy>\", line 88, in _run_code\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_vrfseb0b/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in <module>\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_vrfseb0b/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_vrfseb0b/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_vrfseb0b/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File 
\"/tmp/ansible_openstack.cloud.catalog_service_payload_vrfseb0b/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 91, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 289, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 403, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1478, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 573, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 296, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 139, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 221, in get_auth_ref\n plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 163, in 
_do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE: No start of json char found\nSee stdout/stderr for the exact error", "rc": 1} 2026-01-03 01:01:04.592333 | orchestrator | 2026-01-03 01:01:04.592337 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 01:01:04.592342 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-01-03 01:01:04.592348 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 01:01:04.592355 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 01:01:04.592359 | orchestrator | 2026-01-03 01:01:04.592363 | orchestrator | 2026-01-03 01:01:04.592370 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 01:01:04.592377 | orchestrator | Saturday 03 January 2026 01:01:01 +0000 (0:01:05.344) 0:01:07.130 ****** 2026-01-03 01:01:04.592383 | orchestrator | =============================================================================== 2026-01-03 01:01:04.592390 | orchestrator | service-ks-register : magnum | Creating/deleting services -------------- 65.35s 2026-01-03 01:01:04.592398 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s 2026-01-03 01:01:04.592407 | orchestrator | magnum : include_tasks -------------------------------------------------- 0.59s 2026-01-03 01:01:04.592414 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2026-01-03 01:01:04.593760 | orchestrator | 2026-01-03 01:01:04 | INFO  | Task 
80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:01:04.595730 | orchestrator | 2026-01-03 01:01:04 | INFO  | Task 71d80aac-ef3a-44d9-bde2-cee08eaaff39 is in state STARTED 2026-01-03 01:01:04.597429 | orchestrator | 2026-01-03 01:01:04 | INFO  | Task 435201c2-df0a-4d13-84e5-6e0cdc7b047e is in state STARTED 2026-01-03 01:01:04.597479 | orchestrator | 2026-01-03 01:01:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:02:08.666097 | orchestrator | 2026-01-03 01:02:08 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:02:08.667303 | orchestrator | 2026-01-03 01:02:08 | INFO  | Task 
80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:02:08.669270 | orchestrator | 2026-01-03 01:02:08 | INFO  | Task 71d80aac-ef3a-44d9-bde2-cee08eaaff39 is in state STARTED 2026-01-03 01:02:08.670674 | orchestrator | 2026-01-03 01:02:08 | INFO  | Task 435201c2-df0a-4d13-84e5-6e0cdc7b047e is in state SUCCESS 2026-01-03 01:02:08.670874 | orchestrator | 2026-01-03 01:02:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:02:11.726889 | orchestrator | 2026-01-03 01:02:11 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:02:11.727642 | orchestrator | 2026-01-03 01:02:11 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:02:11.728814 | orchestrator | 2026-01-03 01:02:11 | INFO  | Task 71d80aac-ef3a-44d9-bde2-cee08eaaff39 is in state STARTED 2026-01-03 01:02:11.728838 | orchestrator | 2026-01-03 01:02:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:02:14.778095 | orchestrator | 2026-01-03 01:02:14 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:02:14.779210 | orchestrator | 2026-01-03 01:02:14 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:02:14.781894 | orchestrator | 2026-01-03 01:02:14 | INFO  | Task 71d80aac-ef3a-44d9-bde2-cee08eaaff39 is in state STARTED 2026-01-03 01:02:14.781931 | orchestrator | 2026-01-03 01:02:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:02:17.826503 | orchestrator | 2026-01-03 01:02:17 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:02:17.828329 | orchestrator | 2026-01-03 01:02:17 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:02:17.830181 | orchestrator | 2026-01-03 01:02:17 | INFO  | Task 71d80aac-ef3a-44d9-bde2-cee08eaaff39 is in state STARTED 2026-01-03 01:02:17.830251 | orchestrator | 2026-01-03 01:02:17 | INFO  | Wait 1 second(s) until the next 
check 2026-01-03 01:03:06.580513 | orchestrator | 2026-01-03 01:03:06 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:03:06.582416 | orchestrator 
| 2026-01-03 01:03:06 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:03:06.583285 | orchestrator | 2026-01-03 01:03:06 | INFO  | Task 71d80aac-ef3a-44d9-bde2-cee08eaaff39 is in state STARTED 2026-01-03 01:03:06.583353 | orchestrator | 2026-01-03 01:03:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:09.628635 | orchestrator | 2026-01-03 01:03:09 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:03:09.630918 | orchestrator | 2026-01-03 01:03:09 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:03:09.633667 | orchestrator | 2026-01-03 01:03:09 | INFO  | Task 71d80aac-ef3a-44d9-bde2-cee08eaaff39 is in state STARTED 2026-01-03 01:03:09.633723 | orchestrator | 2026-01-03 01:03:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:12.672026 | orchestrator | 2026-01-03 01:03:12 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:03:12.673817 | orchestrator | 2026-01-03 01:03:12 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:03:12.675820 | orchestrator | 2026-01-03 01:03:12 | INFO  | Task 71d80aac-ef3a-44d9-bde2-cee08eaaff39 is in state STARTED 2026-01-03 01:03:12.675875 | orchestrator | 2026-01-03 01:03:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:15.725165 | orchestrator | 2026-01-03 01:03:15 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:03:15.727629 | orchestrator | 2026-01-03 01:03:15 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:03:15.729116 | orchestrator | 2026-01-03 01:03:15 | INFO  | Task 71d80aac-ef3a-44d9-bde2-cee08eaaff39 is in state STARTED 2026-01-03 01:03:15.729167 | orchestrator | 2026-01-03 01:03:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:18.771849 | orchestrator | 2026-01-03 01:03:18 | INFO  | Task 
b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:03:18.773109 | orchestrator | 2026-01-03 01:03:18 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:03:18.775578 | orchestrator | 2026-01-03 01:03:18 | INFO  | Task 71d80aac-ef3a-44d9-bde2-cee08eaaff39 is in state STARTED 2026-01-03 01:03:18.775638 | orchestrator | 2026-01-03 01:03:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:21.824162 | orchestrator | 2026-01-03 01:03:21 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:03:21.828491 | orchestrator | 2026-01-03 01:03:21 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:03:21.830935 | orchestrator | 2026-01-03 01:03:21 | INFO  | Task 71d80aac-ef3a-44d9-bde2-cee08eaaff39 is in state STARTED 2026-01-03 01:03:21.831009 | orchestrator | 2026-01-03 01:03:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:24.878719 | orchestrator | 2026-01-03 01:03:24 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:03:24.881328 | orchestrator | 2026-01-03 01:03:24 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:03:24.883059 | orchestrator | 2026-01-03 01:03:24 | INFO  | Task 71d80aac-ef3a-44d9-bde2-cee08eaaff39 is in state STARTED 2026-01-03 01:03:24.883138 | orchestrator | 2026-01-03 01:03:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:27.924340 | orchestrator | 2026-01-03 01:03:27 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:03:27.925179 | orchestrator | 2026-01-03 01:03:27 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:03:27.926435 | orchestrator | 2026-01-03 01:03:27 | INFO  | Task 71d80aac-ef3a-44d9-bde2-cee08eaaff39 is in state STARTED 2026-01-03 01:03:27.926482 | orchestrator | 2026-01-03 01:03:27 | INFO  | Wait 1 second(s) until the next 
check 2026-01-03 01:03:30.982964 | orchestrator | 2026-01-03 01:03:30 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:03:30.984557 | orchestrator | 2026-01-03 01:03:30 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:03:30.986955 | orchestrator | 2026-01-03 01:03:30 | INFO  | Task 71d80aac-ef3a-44d9-bde2-cee08eaaff39 is in state STARTED 2026-01-03 01:03:30.987055 | orchestrator | 2026-01-03 01:03:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:34.034451 | orchestrator | 2026-01-03 01:03:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:03:34.036816 | orchestrator | 2026-01-03 01:03:34 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:03:34.039910 | orchestrator | 2026-01-03 01:03:34 | INFO  | Task 71d80aac-ef3a-44d9-bde2-cee08eaaff39 is in state STARTED 2026-01-03 01:03:34.040248 | orchestrator | 2026-01-03 01:03:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:37.081116 | orchestrator | 2026-01-03 01:03:37 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:03:37.186885 | orchestrator | 2026-01-03 01:03:37 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:03:37.186937 | orchestrator | 2026-01-03 01:03:37 | INFO  | Task 71d80aac-ef3a-44d9-bde2-cee08eaaff39 is in state STARTED 2026-01-03 01:03:37.186948 | orchestrator | 2026-01-03 01:03:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:40.132045 | orchestrator | 2026-01-03 01:03:40 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:03:40.133983 | orchestrator | 2026-01-03 01:03:40 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:03:40.136168 | orchestrator | 2026-01-03 01:03:40 | INFO  | Task 71d80aac-ef3a-44d9-bde2-cee08eaaff39 is in state STARTED 2026-01-03 
01:03:40.136247 | orchestrator | 2026-01-03 01:03:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:43.184576 | orchestrator | 2026-01-03 01:03:43 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:03:43.185697 | orchestrator | 2026-01-03 01:03:43 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:03:43.187050 | orchestrator | 2026-01-03 01:03:43 | INFO  | Task 71d80aac-ef3a-44d9-bde2-cee08eaaff39 is in state STARTED 2026-01-03 01:03:43.187084 | orchestrator | 2026-01-03 01:03:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:46.228201 | orchestrator | 2026-01-03 01:03:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:03:46.228678 | orchestrator | 2026-01-03 01:03:46 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:03:46.231774 | orchestrator | 2026-01-03 01:03:46 | INFO  | Task 71d80aac-ef3a-44d9-bde2-cee08eaaff39 is in state STARTED 2026-01-03 01:03:46.231941 | orchestrator | 2026-01-03 01:03:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:49.275695 | orchestrator | 2026-01-03 01:03:49 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:03:49.278080 | orchestrator | 2026-01-03 01:03:49 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:03:49.280474 | orchestrator | 2026-01-03 01:03:49 | INFO  | Task 71d80aac-ef3a-44d9-bde2-cee08eaaff39 is in state STARTED 2026-01-03 01:03:49.280620 | orchestrator | 2026-01-03 01:03:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:52.320486 | orchestrator | 2026-01-03 01:03:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:03:52.322686 | orchestrator | 2026-01-03 01:03:52 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:03:52.324431 | orchestrator | 2026-01-03 01:03:52 | 
INFO  | Task 71d80aac-ef3a-44d9-bde2-cee08eaaff39 is in state STARTED 2026-01-03 01:03:52.324469 | orchestrator | 2026-01-03 01:03:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:55.374173 | orchestrator | 2026-01-03 01:03:55 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:03:55.375318 | orchestrator | 2026-01-03 01:03:55 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:03:55.378311 | orchestrator | 2026-01-03 01:03:55 | INFO  | Task 71d80aac-ef3a-44d9-bde2-cee08eaaff39 is in state SUCCESS 2026-01-03 01:03:55.380203 | orchestrator | 2026-01-03 01:03:55.380261 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-03 01:03:55.380268 | orchestrator | 2.16.14 2026-01-03 01:03:55.380274 | orchestrator | 2026-01-03 01:03:55.380278 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-01-03 01:03:55.380283 | orchestrator | 2026-01-03 01:03:55.380288 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-01-03 01:03:55.380292 | orchestrator | Saturday 03 January 2026 01:00:30 +0000 (0:00:00.281) 0:00:00.281 ****** 2026-01-03 01:03:55.380296 | orchestrator | changed: [testbed-manager] 2026-01-03 01:03:55.380301 | orchestrator | 2026-01-03 01:03:55.380305 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-01-03 01:03:55.380309 | orchestrator | Saturday 03 January 2026 01:00:31 +0000 (0:00:01.635) 0:00:01.917 ****** 2026-01-03 01:03:55.380313 | orchestrator | changed: [testbed-manager] 2026-01-03 01:03:55.380317 | orchestrator | 2026-01-03 01:03:55.380321 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-01-03 01:03:55.380325 | orchestrator | Saturday 03 January 2026 01:00:32 +0000 (0:00:01.026) 0:00:02.944 ****** 2026-01-03 01:03:55.380329 
| orchestrator | changed: [testbed-manager]
2026-01-03 01:03:55.380332 | orchestrator |
2026-01-03 01:03:55.380336 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-01-03 01:03:55.380340 | orchestrator | Saturday 03 January 2026 01:00:33 +0000 (0:00:01.032) 0:00:03.977 ******
2026-01-03 01:03:55.380344 | orchestrator | changed: [testbed-manager]
2026-01-03 01:03:55.380348 | orchestrator |
2026-01-03 01:03:55.380351 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-01-03 01:03:55.380355 | orchestrator | Saturday 03 January 2026 01:00:35 +0000 (0:00:01.186) 0:00:05.163 ******
2026-01-03 01:03:55.380359 | orchestrator | changed: [testbed-manager]
2026-01-03 01:03:55.380363 | orchestrator |
2026-01-03 01:03:55.380367 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-01-03 01:03:55.380371 | orchestrator | Saturday 03 January 2026 01:00:36 +0000 (0:00:01.032) 0:00:06.195 ******
2026-01-03 01:03:55.380374 | orchestrator | changed: [testbed-manager]
2026-01-03 01:03:55.380378 | orchestrator |
2026-01-03 01:03:55.380382 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-01-03 01:03:55.380405 | orchestrator | Saturday 03 January 2026 01:00:37 +0000 (0:00:01.043) 0:00:07.238 ******
2026-01-03 01:03:55.380409 | orchestrator | changed: [testbed-manager]
2026-01-03 01:03:55.380413 | orchestrator |
2026-01-03 01:03:55.380417 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-01-03 01:03:55.380423 | orchestrator | Saturday 03 January 2026 01:00:39 +0000 (0:00:02.111) 0:00:09.349 ******
2026-01-03 01:03:55.380430 | orchestrator | changed: [testbed-manager]
2026-01-03 01:03:55.380435 | orchestrator |
2026-01-03 01:03:55.380441 | orchestrator | TASK [Create admin user] *******************************************************
2026-01-03 01:03:55.380447 | orchestrator | Saturday 03 January 2026 01:00:40 +0000 (0:00:01.168) 0:00:10.518 ******
2026-01-03 01:03:55.380453 | orchestrator | changed: [testbed-manager]
2026-01-03 01:03:55.380459 | orchestrator |
2026-01-03 01:03:55.380465 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-01-03 01:03:55.380471 | orchestrator | Saturday 03 January 2026 01:01:42 +0000 (0:01:01.987) 0:01:12.506 ******
2026-01-03 01:03:55.380476 | orchestrator | skipping: [testbed-manager]
2026-01-03 01:03:55.380482 | orchestrator |
2026-01-03 01:03:55.380500 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-01-03 01:03:55.380507 | orchestrator |
2026-01-03 01:03:55.380521 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-01-03 01:03:55.380527 | orchestrator | Saturday 03 January 2026 01:01:42 +0000 (0:00:00.158) 0:01:12.664 ******
2026-01-03 01:03:55.380546 | orchestrator | changed: [testbed-node-0]
2026-01-03 01:03:55.380553 | orchestrator |
2026-01-03 01:03:55.380559 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-01-03 01:03:55.380566 | orchestrator |
2026-01-03 01:03:55.380573 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-01-03 01:03:55.380578 | orchestrator | Saturday 03 January 2026 01:01:54 +0000 (0:00:11.743) 0:01:24.408 ******
2026-01-03 01:03:55.380585 | orchestrator | changed: [testbed-node-1]
2026-01-03 01:03:55.380591 | orchestrator |
2026-01-03 01:03:55.380598 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-01-03 01:03:55.380604 | orchestrator |
2026-01-03 01:03:55.380611 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-01-03 01:03:55.380617 | orchestrator | Saturday 03 January 2026 01:01:55 +0000 (0:00:01.264) 0:01:25.673 ******
2026-01-03 01:03:55.380623 | orchestrator | changed: [testbed-node-2]
2026-01-03 01:03:55.380629 | orchestrator |
2026-01-03 01:03:55.380636 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 01:03:55.380643 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-03 01:03:55.380652 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 01:03:55.380660 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 01:03:55.380717 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 01:03:55.380725 | orchestrator |
2026-01-03 01:03:55.380754 | orchestrator |
2026-01-03 01:03:55.380760 | orchestrator |
2026-01-03 01:03:55.380767 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 01:03:55.380774 | orchestrator | Saturday 03 January 2026 01:02:06 +0000 (0:00:11.214) 0:01:36.887 ******
2026-01-03 01:03:55.380781 | orchestrator | ===============================================================================
2026-01-03 01:03:55.380787 | orchestrator | Create admin user ------------------------------------------------------ 61.99s
2026-01-03 01:03:55.380808 | orchestrator | Restart ceph manager service ------------------------------------------- 24.22s
2026-01-03 01:03:55.380816 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.11s
2026-01-03 01:03:55.380832 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.64s
2026-01-03 01:03:55.380840 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.19s
2026-01-03 01:03:55.381063 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.17s
2026-01-03 01:03:55.381097 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.04s
2026-01-03 01:03:55.381104 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.03s
2026-01-03 01:03:55.381110 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.03s
2026-01-03 01:03:55.381117 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.03s
2026-01-03 01:03:55.381123 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.16s
2026-01-03 01:03:55.381129 | orchestrator |
2026-01-03 01:03:55.381137 | orchestrator |
2026-01-03 01:03:55.381143 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-03 01:03:55.381150 | orchestrator |
2026-01-03 01:03:55.381156 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-03 01:03:55.381163 | orchestrator | Saturday 03 January 2026 01:01:05 +0000 (0:00:00.274) 0:00:00.274 ******
2026-01-03 01:03:55.381169 | orchestrator | ok: [testbed-manager]
2026-01-03 01:03:55.381221 | orchestrator | ok: [testbed-node-0]
2026-01-03 01:03:55.381231 | orchestrator | ok: [testbed-node-1]
2026-01-03 01:03:55.381237 | orchestrator | ok: [testbed-node-2]
2026-01-03 01:03:55.381243 | orchestrator | ok: [testbed-node-3]
2026-01-03 01:03:55.381249 | orchestrator | ok: [testbed-node-4]
2026-01-03 01:03:55.381255 | orchestrator | ok: [testbed-node-5]
2026-01-03 01:03:55.381260 | orchestrator |
2026-01-03 01:03:55.381267 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-03 01:03:55.381273 | orchestrator | Saturday 03 January 2026 01:01:06 +0000 (0:00:00.760) 0:00:01.034 ******
2026-01-03 01:03:55.381280 | orchestrator | ok:
[testbed-manager] => (item=enable_prometheus_True) 2026-01-03 01:03:55.381286 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-01-03 01:03:55.381293 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-01-03 01:03:55.381312 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-01-03 01:03:55.381319 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-01-03 01:03:55.381325 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-01-03 01:03:55.381331 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-01-03 01:03:55.381337 | orchestrator | 2026-01-03 01:03:55.381344 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-01-03 01:03:55.381348 | orchestrator | 2026-01-03 01:03:55.381352 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-01-03 01:03:55.381355 | orchestrator | Saturday 03 January 2026 01:01:06 +0000 (0:00:00.659) 0:00:01.694 ****** 2026-01-03 01:03:55.381361 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 01:03:55.381366 | orchestrator | 2026-01-03 01:03:55.381370 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-01-03 01:03:55.381376 | orchestrator | Saturday 03 January 2026 01:01:08 +0000 (0:00:01.357) 0:00:03.052 ****** 2026-01-03 01:03:55.381398 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-03 01:03:55.381420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:03:55.381465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:03:55.381474 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:03:55.381480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:03:55.381487 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:03:55.381499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-03 01:03:55.381505 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:03:55.381516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:03:55.381524 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:03:55.381571 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:03:55.381576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:03:55.381581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:03:55.381586 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:03:55.381595 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:03:55.381599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:03:55.381610 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 01:03:55.381615 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-03 01:03:55.381619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:03:55.381623 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:03:55.381627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:03:55.381639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:03:55.381647 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-03 01:03:55.381651 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:03:55.381659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:03:55.381664 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-03 01:03:55.381668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:03:55.381672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:03:55.381684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:03:55.381688 | orchestrator | 2026-01-03 01:03:55.381695 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-01-03 01:03:55.381699 | orchestrator | Saturday 03 January 2026 01:01:10 +0000 (0:00:02.772) 0:00:05.824 ****** 2026-01-03 01:03:55.381706 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 01:03:55.381719 | orchestrator | 2026-01-03 01:03:55.381723 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-01-03 01:03:55.381726 | orchestrator | Saturday 03 January 2026 01:01:12 +0000 (0:00:01.344) 0:00:07.169 ****** 2026-01-03 01:03:55.381731 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-03 01:03:55.381740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:03:55.381744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:03:55.381748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:03:55.381752 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:03:55.381756 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:03:55.381767 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:03:55.381771 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:03:55.381775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:03:55.381784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:03:55.381788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:03:55.381792 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:03:55.381796 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:03:55.381803 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:03:55.381810 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:03:55.381814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:03:55.381818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:03:55.382329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:03:55.382351 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-03 01:03:55.382356 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 01:03:55.382368 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-03 01:03:55.382378 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-03 01:03:55.382383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:03:55.382387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 
01:03:55.382398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:03:55.382404 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:03:55.382411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:03:55.382426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:03:55.382434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:03:55.382440 | orchestrator | 2026-01-03 01:03:55.382451 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-01-03 01:03:55.382459 | orchestrator | Saturday 03 January 2026 01:01:18 +0000 (0:00:05.916) 0:00:13.086 ****** 2026-01-03 01:03:55.382466 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-03 01:03:55.382477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 01:03:55.382484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:03:55.382490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 01:03:55.382501 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 01:03:55.382508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:03:55.382518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:03:55.382526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 01:03:55.382532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 01:03:55.382542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:03:55.382546 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 01:03:55.382550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:03:55.382557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 01:03:55.382561 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:03:55.382571 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 01:03:55.382575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:03:55.382579 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:03:55.382583 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 01:03:55.382591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:03:55.382621 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 
'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:03:55.382628 | orchestrator | skipping: [testbed-manager] 2026-01-03 01:03:55.382632 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 01:03:55.382636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:03:55.382640 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-03 01:03:55.382644 | orchestrator | skipping: [testbed-node-4] 2026-01-03 01:03:55.382651 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 01:03:55.382655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 01:03:55.382659 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 01:03:55.382666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:03:55.382674 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:03:55.382678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-03 01:03:55.382682 | orchestrator | skipping: [testbed-node-3] 2026-01-03 01:03:55.382686 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 01:03:55.382690 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 01:03:55.382696 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-03 01:03:55.382700 | orchestrator | skipping: [testbed-node-5] 2026-01-03 01:03:55.382704 | orchestrator | 2026-01-03 01:03:55.382708 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-01-03 01:03:55.382712 | orchestrator | Saturday 03 January 2026 01:01:20 +0000 (0:00:02.289) 0:00:15.375 ****** 2026-01-03 01:03:55.382716 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-03 01:03:55.382724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 01:03:55.382732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 01:03:55.382736 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 
01:03:55.382740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:03:55.382744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:03:55.382751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:03:55.382755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 01:03:55.382760 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 01:03:55.382774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:03:55.382783 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:03:55.382791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:03:55.382800 | orchestrator | skipping: [testbed-manager] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-03 01:03:55.382810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 01:03:55.382816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 01:03:55.382822 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:03:55.382829 | orchestrator | skipping: [testbed-manager] 2026-01-03 01:03:55.382841 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 01:03:55.382874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:03:55.382881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 01:03:55.382887 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 01:03:55.382893 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 01:03:55.382902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:03:55.382908 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:03:55.382914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:03:55.382920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 01:03:55.382934 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-03 01:03:55.382941 | orchestrator | skipping: [testbed-node-3] 
2026-01-03 01:03:55.382947 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 01:03:55.382954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 01:03:55.382961 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-03 01:03:55.382968 | orchestrator | skipping: [testbed-node-4] 2026-01-03 01:03:55.382975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:03:55.382981 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:03:55.382991 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-03 01:03:55.382998 | orchestrator | skipping: [testbed-node-5] 2026-01-03 01:03:55.383004 | orchestrator | 2026-01-03 01:03:55.383011 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-01-03 01:03:55.383018 | orchestrator | Saturday 03 January 2026 01:01:22 +0000 (0:00:02.536) 0:00:17.912 ****** 2026-01-03 01:03:55.383029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:03:55.383041 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 
'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-03 01:03:55.383048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:03:55.383053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:03:55.383057 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:03:55.383065 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:03:55.383069 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:03:55.383080 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:03:55.383084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:03:55.383093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:03:55.383099 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:03:55.383103 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:03:55.383109 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:03:55.383119 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:03:55.383132 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:03:55.383140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:03:55.383150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:03:55.383157 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-03 01:03:55.383164 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 01:03:55.383171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:03:55.383181 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-03 01:03:55.383195 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-03 01:03:55.383201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:03:55.383209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:03:55.383214 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:03:55.383219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:03:55.383223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:03:55.383228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:03:55.383239 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:03:55.383244 | orchestrator | 2026-01-03 01:03:55.383248 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-01-03 01:03:55.383253 | orchestrator | Saturday 03 January 2026 01:01:29 +0000 (0:00:06.602) 0:00:24.515 ****** 2026-01-03 01:03:55.383258 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-03 01:03:55.383263 | orchestrator | 2026-01-03 01:03:55.383268 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-01-03 01:03:55.383275 | orchestrator | Saturday 03 January 2026 01:01:30 +0000 (0:00:01.269) 0:00:25.784 ****** 2026-01-03 01:03:55.383280 | orchestrator | skipping: [testbed-manager] 2026-01-03 01:03:55.383287 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:03:55.383297 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:03:55.383303 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:03:55.383309 | orchestrator | skipping: [testbed-node-3] 2026-01-03 01:03:55.383315 | orchestrator | skipping: [testbed-node-4] 2026-01-03 01:03:55.383320 | orchestrator | skipping: [testbed-node-5] 2026-01-03 01:03:55.383326 | orchestrator | 2026-01-03 01:03:55.383331 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-01-03 01:03:55.383337 | orchestrator | Saturday 03 January 2026 01:01:31 +0000 (0:00:00.698) 0:00:26.483 ****** 2026-01-03 01:03:55.383343 | orchestrator | ok: 
[testbed-manager -> localhost] 2026-01-03 01:03:55.383348 | orchestrator | 2026-01-03 01:03:55.383354 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-01-03 01:03:55.383361 | orchestrator | Saturday 03 January 2026 01:01:32 +0000 (0:00:00.684) 0:00:27.167 ****** 2026-01-03 01:03:55.383368 | orchestrator | [WARNING]: Skipped 2026-01-03 01:03:55.383374 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-03 01:03:55.383380 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-01-03 01:03:55.383386 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-03 01:03:55.383393 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-01-03 01:03:55.383399 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-03 01:03:55.383409 | orchestrator | [WARNING]: Skipped 2026-01-03 01:03:55.383415 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-03 01:03:55.383421 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-01-03 01:03:55.383425 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-03 01:03:55.383429 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-01-03 01:03:55.383433 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-03 01:03:55.383437 | orchestrator | [WARNING]: Skipped 2026-01-03 01:03:55.383441 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-03 01:03:55.383445 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-01-03 01:03:55.383449 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-03 01:03:55.383452 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-01-03 01:03:55.383456 | orchestrator | [WARNING]: Skipped 
2026-01-03 01:03:55.383460 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-03 01:03:55.383464 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-01-03 01:03:55.383467 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-03 01:03:55.383477 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-01-03 01:03:55.383481 | orchestrator | [WARNING]: Skipped 2026-01-03 01:03:55.383485 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-03 01:03:55.383488 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-01-03 01:03:55.383492 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-03 01:03:55.383496 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-01-03 01:03:55.383500 | orchestrator | [WARNING]: Skipped 2026-01-03 01:03:55.383504 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-03 01:03:55.383508 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-01-03 01:03:55.383511 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-03 01:03:55.383515 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-01-03 01:03:55.383519 | orchestrator | [WARNING]: Skipped 2026-01-03 01:03:55.383523 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-03 01:03:55.383527 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-01-03 01:03:55.383530 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-03 01:03:55.383534 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-01-03 01:03:55.383538 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-03 01:03:55.383542 | 
orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-03 01:03:55.383546 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-03 01:03:55.383550 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-03 01:03:55.383553 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-03 01:03:55.383557 | orchestrator | 2026-01-03 01:03:55.383561 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-01-03 01:03:55.383564 | orchestrator | Saturday 03 January 2026 01:01:33 +0000 (0:00:01.705) 0:00:28.873 ****** 2026-01-03 01:03:55.383569 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-03 01:03:55.383574 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:03:55.383578 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-03 01:03:55.383582 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:03:55.383586 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-03 01:03:55.383590 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:03:55.383594 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-03 01:03:55.383598 | orchestrator | skipping: [testbed-node-3] 2026-01-03 01:03:55.383602 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-03 01:03:55.383605 | orchestrator | skipping: [testbed-node-4] 2026-01-03 01:03:55.383609 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-03 01:03:55.383613 | orchestrator | skipping: [testbed-node-5] 2026-01-03 01:03:55.383617 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-01-03 01:03:55.383621 | orchestrator | 
2026-01-03 01:03:55.383624 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-01-03 01:03:55.383628 | orchestrator | Saturday 03 January 2026 01:01:46 +0000 (0:00:13.022) 0:00:41.896 ****** 2026-01-03 01:03:55.383632 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-03 01:03:55.383636 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:03:55.383639 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-03 01:03:55.383647 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:03:55.383651 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-03 01:03:55.383655 | orchestrator | skipping: [testbed-node-3] 2026-01-03 01:03:55.383659 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-03 01:03:55.383663 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:03:55.383670 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-03 01:03:55.383674 | orchestrator | skipping: [testbed-node-4] 2026-01-03 01:03:55.383678 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-03 01:03:55.383682 | orchestrator | skipping: [testbed-node-5] 2026-01-03 01:03:55.383686 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-01-03 01:03:55.383690 | orchestrator | 2026-01-03 01:03:55.383693 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-01-03 01:03:55.383697 | orchestrator | Saturday 03 January 2026 01:01:50 +0000 (0:00:03.239) 0:00:45.135 ****** 2026-01-03 01:03:55.383725 | orchestrator | skipping: 
[testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-03 01:03:55.383731 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:03:55.383735 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-03 01:03:55.383739 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-03 01:03:55.383743 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:03:55.383747 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:03:55.383751 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-03 01:03:55.383755 | orchestrator | skipping: [testbed-node-3] 2026-01-03 01:03:55.383760 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-01-03 01:03:55.383766 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-03 01:03:55.383772 | orchestrator | skipping: [testbed-node-4] 2026-01-03 01:03:55.383779 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-03 01:03:55.383786 | orchestrator | skipping: [testbed-node-5] 2026-01-03 01:03:55.383792 | orchestrator | 2026-01-03 01:03:55.383798 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-01-03 01:03:55.383805 | orchestrator | Saturday 03 January 2026 01:01:51 +0000 (0:00:01.580) 0:00:46.716 ****** 2026-01-03 01:03:55.383812 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-03 01:03:55.383818 | orchestrator | 
2026-01-03 01:03:55.383825 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-01-03 01:03:55.383831 | orchestrator | Saturday 03 January 2026 01:01:52 +0000 (0:00:00.766) 0:00:47.483 ****** 2026-01-03 01:03:55.383837 | orchestrator | skipping: [testbed-manager] 2026-01-03 01:03:55.383843 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:03:55.384029 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:03:55.384038 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:03:55.384042 | orchestrator | skipping: [testbed-node-3] 2026-01-03 01:03:55.384046 | orchestrator | skipping: [testbed-node-4] 2026-01-03 01:03:55.384050 | orchestrator | skipping: [testbed-node-5] 2026-01-03 01:03:55.384056 | orchestrator | 2026-01-03 01:03:55.384063 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-01-03 01:03:55.384074 | orchestrator | Saturday 03 January 2026 01:01:53 +0000 (0:00:00.721) 0:00:48.205 ****** 2026-01-03 01:03:55.384090 | orchestrator | skipping: [testbed-manager] 2026-01-03 01:03:55.384096 | orchestrator | skipping: [testbed-node-3] 2026-01-03 01:03:55.384101 | orchestrator | skipping: [testbed-node-4] 2026-01-03 01:03:55.384107 | orchestrator | skipping: [testbed-node-5] 2026-01-03 01:03:55.384113 | orchestrator | changed: [testbed-node-0] 2026-01-03 01:03:55.384119 | orchestrator | changed: [testbed-node-1] 2026-01-03 01:03:55.384125 | orchestrator | changed: [testbed-node-2] 2026-01-03 01:03:55.384131 | orchestrator | 2026-01-03 01:03:55.384137 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-01-03 01:03:55.384144 | orchestrator | Saturday 03 January 2026 01:01:55 +0000 (0:00:02.304) 0:00:50.509 ****** 2026-01-03 01:03:55.384150 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-03 01:03:55.384157 | orchestrator | skipping: 
[testbed-manager] 2026-01-03 01:03:55.384163 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-03 01:03:55.384170 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-03 01:03:55.384174 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:03:55.384178 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:03:55.384181 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-03 01:03:55.384185 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:03:55.384189 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-03 01:03:55.384193 | orchestrator | skipping: [testbed-node-3] 2026-01-03 01:03:55.384196 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-03 01:03:55.384200 | orchestrator | skipping: [testbed-node-4] 2026-01-03 01:03:55.384204 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-03 01:03:55.384208 | orchestrator | skipping: [testbed-node-5] 2026-01-03 01:03:55.384211 | orchestrator | 2026-01-03 01:03:55.384215 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-01-03 01:03:55.384219 | orchestrator | Saturday 03 January 2026 01:01:57 +0000 (0:00:01.741) 0:00:52.251 ****** 2026-01-03 01:03:55.384223 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-03 01:03:55.384227 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-03 01:03:55.384231 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:03:55.384235 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:03:55.384238 
| orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-03 01:03:55.384243 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:03:55.384248 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-03 01:03:55.384255 | orchestrator | skipping: [testbed-node-3] 2026-01-03 01:03:55.384260 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-03 01:03:55.384270 | orchestrator | skipping: [testbed-node-4] 2026-01-03 01:03:55.384277 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-03 01:03:55.384283 | orchestrator | skipping: [testbed-node-5] 2026-01-03 01:03:55.384288 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-01-03 01:03:55.384294 | orchestrator | 2026-01-03 01:03:55.384300 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-01-03 01:03:55.384307 | orchestrator | Saturday 03 January 2026 01:01:59 +0000 (0:00:01.716) 0:00:53.967 ****** 2026-01-03 01:03:55.384320 | orchestrator | [WARNING]: Skipped 2026-01-03 01:03:55.384326 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-01-03 01:03:55.384334 | orchestrator | due to this access issue: 2026-01-03 01:03:55.384338 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-01-03 01:03:55.384342 | orchestrator | not a directory 2026-01-03 01:03:55.384346 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-03 01:03:55.384350 | orchestrator | 2026-01-03 01:03:55.384353 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 
2026-01-03 01:03:55.384357 | orchestrator | Saturday 03 January 2026 01:02:00 +0000 (0:00:01.127) 0:00:55.094 ****** 2026-01-03 01:03:55.384361 | orchestrator | skipping: [testbed-manager] 2026-01-03 01:03:55.384365 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:03:55.384369 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:03:55.384372 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:03:55.384376 | orchestrator | skipping: [testbed-node-3] 2026-01-03 01:03:55.384380 | orchestrator | skipping: [testbed-node-4] 2026-01-03 01:03:55.384384 | orchestrator | skipping: [testbed-node-5] 2026-01-03 01:03:55.384388 | orchestrator | 2026-01-03 01:03:55.384392 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-01-03 01:03:55.384395 | orchestrator | Saturday 03 January 2026 01:02:01 +0000 (0:00:00.891) 0:00:55.986 ****** 2026-01-03 01:03:55.384399 | orchestrator | skipping: [testbed-manager] 2026-01-03 01:03:55.384410 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:03:55.384415 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:03:55.384418 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:03:55.384423 | orchestrator | skipping: [testbed-node-3] 2026-01-03 01:03:55.384427 | orchestrator | skipping: [testbed-node-4] 2026-01-03 01:03:55.384430 | orchestrator | skipping: [testbed-node-5] 2026-01-03 01:03:55.384434 | orchestrator | 2026-01-03 01:03:55.384438 | orchestrator | TASK [service-check-containers : prometheus | Check containers] **************** 2026-01-03 01:03:55.384446 | orchestrator | Saturday 03 January 2026 01:02:01 +0000 (0:00:00.673) 0:00:56.660 ****** 2026-01-03 01:03:55.384451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:03:55.384456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:03:55.384461 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-03 01:03:55.384472 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-03 01:03:55.384479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-03 01:03:55.384484 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-03 01:03:55.384502 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-03 01:03:55.384509 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-03 01:03:55.384516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 01:03:55.384522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 01:03:55.384530 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-03 01:03:55.384542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 01:03:55.384549 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-03 01:03:55.384556 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-03 01:03:55.384569 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-03 01:03:55.384577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 01:03:55.384584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 01:03:55.384591 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-03 01:03:55.384602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 01:03:55.384608 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-03 01:03:55.384615 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-03 01:03:55.384628 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 01:03:55.384635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-03 01:03:55.384642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-03 01:03:55.384656 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 01:03:55.384664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-03 01:03:55.384671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 01:03:55.384678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 01:03:55.384688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 01:03:55.384695 | orchestrator |
2026-01-03 01:03:55.384701 | orchestrator | TASK [service-check-containers : prometheus | Notify handlers to restart containers] ***
2026-01-03 01:03:55.384708 | orchestrator | Saturday 03 January 2026 01:02:06 +0000 (0:00:04.615) 0:01:01.276 ******
2026-01-03 01:03:55.384719 | orchestrator | changed: [testbed-manager] => {
2026-01-03 01:03:55.384727 | orchestrator |  "msg": "Notifying handlers"
2026-01-03 01:03:55.384734 | orchestrator | }
2026-01-03 01:03:55.384741 | orchestrator | changed: [testbed-node-0] => {
2026-01-03 01:03:55.384748 | orchestrator |  "msg": "Notifying handlers"
2026-01-03 01:03:55.384755 | orchestrator | }
2026-01-03 01:03:55.384761 | orchestrator | changed: [testbed-node-1] => {
2026-01-03 01:03:55.384767 | orchestrator |  "msg": "Notifying handlers"
2026-01-03 01:03:55.384773 | orchestrator | }
2026-01-03 01:03:55.384778 | orchestrator | changed: [testbed-node-2] => {
2026-01-03 01:03:55.384784 | orchestrator |  "msg": "Notifying handlers"
2026-01-03 01:03:55.384793 | orchestrator | }
2026-01-03 01:03:55.384802 | orchestrator | changed: [testbed-node-3] => {
2026-01-03 01:03:55.384807 | orchestrator |  "msg": "Notifying handlers"
2026-01-03 01:03:55.384812 | orchestrator | }
2026-01-03 01:03:55.384818 | orchestrator | changed: [testbed-node-4] => {
2026-01-03 01:03:55.384824 | orchestrator |  "msg": "Notifying handlers"
2026-01-03 01:03:55.384836 | orchestrator | }
2026-01-03 01:03:55.384843 | orchestrator | changed: [testbed-node-5] => {
2026-01-03 01:03:55.384871 | orchestrator |  "msg": "Notifying handlers"
2026-01-03 01:03:55.384877 | orchestrator | }
2026-01-03 01:03:55.384882 | orchestrator |
2026-01-03 01:03:55.384888 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-03 01:03:55.384893 | orchestrator | Saturday 03 January 2026 01:02:07 +0000 (0:00:00.939) 0:01:02.215 ******
2026-01-03 01:03:55.384899 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-01-03 01:03:55.384909 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-03 01:03:55.384916 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-03 01:03:55.384930 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 01:03:55.384944 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 01:03:55.384960 | orchestrator | skipping: [testbed-manager]
2026-01-03 01:03:55.384967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-03 01:03:55.384974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 01:03:55.384980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 01:03:55.384988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-03 01:03:55.384994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 01:03:55.385002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-03 01:03:55.385016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 01:03:55.385035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 01:03:55.385042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-03 01:03:55.385049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 01:03:55.385056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 01:03:55.385062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-03 01:03:55.385069 | orchestrator | skipping: [testbed-node-0]
2026-01-03 01:03:55.385075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 01:03:55.385082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-03 01:03:55.385089 | orchestrator | skipping: [testbed-node-2]
2026-01-03 01:03:55.385099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 01:03:55.385108 | orchestrator | skipping: [testbed-node-1]
2026-01-03 01:03:55.385111 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-03 01:03:55.385116 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-03 01:03:55.385120 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-03 01:03:55.385124 | orchestrator | skipping: [testbed-node-3]
2026-01-03 01:03:55.385128 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-03 01:03:55.385133 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-03 01:03:55.385137 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-03 01:03:55.385141 | orchestrator | skipping: [testbed-node-4]
2026-01-03 01:03:55.385148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-03 01:03:55.385160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-03 01:03:55.385166 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-03 01:03:55.385172 | orchestrator | skipping: [testbed-node-5]
2026-01-03 01:03:55.385179 | orchestrator |
2026-01-03 01:03:55.385185 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-01-03 01:03:55.385192 | orchestrator | Saturday 03 January 2026 01:02:09 +0000 (0:00:01.952) 0:01:04.167 ******
2026-01-03 01:03:55.385201 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-01-03 01:03:55.385207 | orchestrator | skipping: [testbed-manager]
2026-01-03 01:03:55.385214 | orchestrator |
2026-01-03 01:03:55.385220 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-03 01:03:55.385226 | orchestrator | Saturday 03 January 2026 01:02:10 +0000 (0:00:00.975) 0:01:05.143 ******
2026-01-03 01:03:55.385232 | orchestrator |
2026-01-03 01:03:55.385238 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-03 01:03:55.385244 | orchestrator | Saturday 03 January 2026 01:02:10 +0000 (0:00:00.065) 0:01:05.209 ******
2026-01-03 01:03:55.385249 | orchestrator |
2026-01-03 01:03:55.385255 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-03 01:03:55.385261 | orchestrator | Saturday 03 January 2026 01:02:10 +0000 (0:00:00.063) 0:01:05.272 ******
2026-01-03 01:03:55.385268 | orchestrator |
2026-01-03 01:03:55.385274 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-03 01:03:55.385280 | orchestrator | Saturday 03 January 2026 01:02:10 +0000 (0:00:00.062) 0:01:05.335 ******
2026-01-03 01:03:55.385286 | orchestrator |
2026-01-03 01:03:55.385292 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-03 01:03:55.385298 | orchestrator | Saturday 03 January 2026 01:02:10 +0000 (0:00:00.087) 0:01:05.422 ******
2026-01-03 01:03:55.385304 | orchestrator |
2026-01-03 01:03:55.385310 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-03 01:03:55.385316 | orchestrator | Saturday 03 January 2026 01:02:10 +0000 (0:00:00.062) 0:01:05.484 ******
2026-01-03 01:03:55.385322 | orchestrator |
2026-01-03 01:03:55.385329 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-03 01:03:55.385333 | orchestrator | Saturday 03 January 2026 01:02:10 +0000 (0:00:00.063) 0:01:05.548 ******
2026-01-03 01:03:55.385337 | orchestrator |
2026-01-03 01:03:55.385341 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-01-03 01:03:55.385345 | orchestrator | Saturday 03 January 2026 01:02:10 +0000 (0:00:00.289) 0:01:05.837 ******
2026-01-03 01:03:55.385349 | orchestrator | changed: [testbed-manager]
2026-01-03 01:03:55.385353 | orchestrator |
2026-01-03 01:03:55.385357 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-01-03 01:03:55.385367 | orchestrator | Saturday 03 January 2026 01:02:31 +0000 (0:00:20.793) 0:01:26.631 ******
2026-01-03 01:03:55.385371 | orchestrator | changed: [testbed-node-5]
2026-01-03 01:03:55.385375 | orchestrator | changed: [testbed-node-3]
2026-01-03 01:03:55.385379 | orchestrator | changed: [testbed-manager]
2026-01-03 01:03:55.385383 | orchestrator | changed: [testbed-node-0]
2026-01-03 01:03:55.385386 | orchestrator | changed: [testbed-node-1]
2026-01-03 01:03:55.385390 | orchestrator | changed: [testbed-node-4]
2026-01-03 01:03:55.385395 | orchestrator | changed: [testbed-node-2]
2026-01-03 01:03:55.385398 | orchestrator |
2026-01-03 01:03:55.385402 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-01-03 01:03:55.385407 | orchestrator | Saturday 03 January 2026 01:02:45 +0000 (0:00:13.368) 0:01:40.000 ******
2026-01-03 01:03:55.385411 | orchestrator | changed: [testbed-node-1]
2026-01-03 01:03:55.385414 | orchestrator | changed: [testbed-node-2]
2026-01-03 01:03:55.385418 | orchestrator | changed: [testbed-node-0]
2026-01-03 01:03:55.385422 | orchestrator |
2026-01-03 01:03:55.385426 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-01-03 01:03:55.385430 | orchestrator | Saturday 03 January 2026 01:02:55 +0000 (0:00:10.372) 0:01:50.373 ******
2026-01-03 01:03:55.385434 | orchestrator | changed: [testbed-node-0]
2026-01-03 01:03:55.385438 | orchestrator | changed: [testbed-node-1]
2026-01-03 01:03:55.385442 | orchestrator | changed: [testbed-node-2]
2026-01-03 01:03:55.385445 | orchestrator |
2026-01-03 01:03:55.385449 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-01-03 01:03:55.385453 | orchestrator | Saturday 03 January 2026 01:03:00 +0000 (0:00:05.024) 0:01:55.398 ******
2026-01-03 01:03:55.385458 | orchestrator | changed: [testbed-node-1]
2026-01-03 01:03:55.385464 | orchestrator | changed: [testbed-node-0]
2026-01-03 01:03:55.385471 | orchestrator | changed: [testbed-manager]
2026-01-03 01:03:55.385480 | orchestrator | changed: [testbed-node-3]
2026-01-03 01:03:55.385487 | orchestrator | changed: [testbed-node-5]
2026-01-03 01:03:55.385498 | orchestrator | changed: [testbed-node-2]
2026-01-03 01:03:55.385504 | orchestrator | changed: [testbed-node-4]
2026-01-03 01:03:55.385511 | orchestrator |
2026-01-03 01:03:55.385517 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-01-03 01:03:55.385523 | orchestrator | Saturday 03 January 2026 01:03:14 +0000 (0:00:14.036) 0:02:09.434 ******
2026-01-03 01:03:55.385529 | orchestrator | changed: [testbed-manager]
2026-01-03 01:03:55.385536 | orchestrator |
2026-01-03 01:03:55.385548 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-01-03 01:03:55.385555 | orchestrator | Saturday 03 January 2026 01:03:28 +0000 (0:00:13.645) 0:02:23.080 ******
2026-01-03 01:03:55.385562 | orchestrator | changed: [testbed-node-0]
2026-01-03 01:03:55.385568 | orchestrator | changed: [testbed-node-1]
2026-01-03 01:03:55.385574 | orchestrator | changed: [testbed-node-2]
2026-01-03 01:03:55.385580 | orchestrator |
2026-01-03 01:03:55.385586 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-01-03 01:03:55.385592 | orchestrator | Saturday 03 January 2026 01:03:37 +0000 (0:00:09.526) 0:02:32.607 ******
2026-01-03 01:03:55.385598 | orchestrator | changed: [testbed-manager]
2026-01-03 01:03:55.385604 | orchestrator |
2026-01-03 01:03:55.385610 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-01-03 01:03:55.385616 | orchestrator | Saturday 03 January 2026 01:03:43 +0000 (0:00:05.617) 0:02:38.224 ******
2026-01-03 01:03:55.385623 | orchestrator | changed: [testbed-node-5]
2026-01-03 01:03:55.385627 | orchestrator | changed: [testbed-node-3]
2026-01-03 01:03:55.385632 | orchestrator | changed: [testbed-node-4]
2026-01-03 01:03:55.385638 | orchestrator |
2026-01-03 01:03:55.385644 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 01:03:55.385651 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 
failed=0 skipped=10  rescued=0 ignored=0 2026-01-03 01:03:55.385664 | orchestrator | testbed-node-0 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-03 01:03:55.385671 | orchestrator | testbed-node-1 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-03 01:03:55.385678 | orchestrator | testbed-node-2 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-03 01:03:55.385684 | orchestrator | testbed-node-3 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-01-03 01:03:55.385690 | orchestrator | testbed-node-4 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-01-03 01:03:55.385695 | orchestrator | testbed-node-5 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-01-03 01:03:55.385703 | orchestrator | 2026-01-03 01:03:55.385707 | orchestrator | 2026-01-03 01:03:55.385711 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 01:03:55.385714 | orchestrator | Saturday 03 January 2026 01:03:53 +0000 (0:00:10.295) 0:02:48.519 ****** 2026-01-03 01:03:55.385718 | orchestrator | =============================================================================== 2026-01-03 01:03:55.385722 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 20.79s 2026-01-03 01:03:55.385727 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.04s 2026-01-03 01:03:55.385731 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 13.65s 2026-01-03 01:03:55.385735 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.37s 2026-01-03 01:03:55.385739 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 13.02s 2026-01-03 01:03:55.385742 | orchestrator | prometheus : Restart 
prometheus-mysqld-exporter container -------------- 10.37s 2026-01-03 01:03:55.385746 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.30s 2026-01-03 01:03:55.385750 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 9.53s 2026-01-03 01:03:55.385754 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.60s 2026-01-03 01:03:55.385758 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.92s 2026-01-03 01:03:55.385765 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.62s 2026-01-03 01:03:55.385769 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.03s 2026-01-03 01:03:55.385773 | orchestrator | service-check-containers : prometheus | Check containers ---------------- 4.62s 2026-01-03 01:03:55.385777 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.24s 2026-01-03 01:03:55.385781 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.77s 2026-01-03 01:03:55.385786 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.54s 2026-01-03 01:03:55.385792 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.30s 2026-01-03 01:03:55.385798 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 2.29s 2026-01-03 01:03:55.385805 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.95s 2026-01-03 01:03:55.385811 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 1.74s 2026-01-03 01:03:55.385823 | orchestrator | 2026-01-03 01:03:55 | INFO  | Task 0b3b3697-9315-4de5-a782-daa1211c95fb is in state STARTED 2026-01-03 01:03:55.385830 | orchestrator | 2026-01-03 
01:03:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:58.443874 | orchestrator | 2026-01-03 01:03:58 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:03:58.445035 | orchestrator | 2026-01-03 01:03:58 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:03:58.447659 | orchestrator | 2026-01-03 01:03:58 | INFO  | Task 0b3b3697-9315-4de5-a782-daa1211c95fb is in state STARTED 2026-01-03 01:03:58.447712 | orchestrator | 2026-01-03 01:03:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:05:29.982987 | orchestrator | 2026-01-03 01:05:29 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:05:29.984640 | orchestrator | 2026-01-03 01:05:29 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in 
state STARTED 2026-01-03 01:05:29.986728 | orchestrator | 2026-01-03 01:05:29 | INFO  | Task 0b3b3697-9315-4de5-a782-daa1211c95fb is in state SUCCESS 2026-01-03 01:05:29.988720 | orchestrator | 2026-01-03 01:05:29.988755 | orchestrator | 2026-01-03 01:05:29.988761 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-03 01:05:29.988767 | orchestrator | 2026-01-03 01:05:29.988772 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-03 01:05:29.988777 | orchestrator | Saturday 03 January 2026 01:03:58 +0000 (0:00:00.290) 0:00:00.290 ****** 2026-01-03 01:05:29.988792 | orchestrator | ok: [testbed-node-0] 2026-01-03 01:05:29.988799 | orchestrator | ok: [testbed-node-1] 2026-01-03 01:05:29.988804 | orchestrator | ok: [testbed-node-2] 2026-01-03 01:05:29.988810 | orchestrator | 2026-01-03 01:05:29.988815 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-03 01:05:29.988820 | orchestrator | Saturday 03 January 2026 01:03:58 +0000 (0:00:00.309) 0:00:00.600 ****** 2026-01-03 01:05:29.988826 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-01-03 01:05:29.988832 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-01-03 01:05:29.988837 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-01-03 01:05:29.988843 | orchestrator | 2026-01-03 01:05:29.988849 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-01-03 01:05:29.988855 | orchestrator | 2026-01-03 01:05:29.988860 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-01-03 01:05:29.988865 | orchestrator | Saturday 03 January 2026 01:03:58 +0000 (0:00:00.410) 0:00:01.010 ****** 2026-01-03 01:05:29.988871 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-01-03 01:05:29.988877 | orchestrator | 2026-01-03 01:05:29.988893 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-01-03 01:05:29.988898 | orchestrator | Saturday 03 January 2026 01:03:59 +0000 (0:00:00.519) 0:00:01.530 ****** 2026-01-03 01:05:29.988905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 01:05:29.988926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 01:05:29.988946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 
'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-03 01:05:29.988951 | orchestrator | 2026-01-03 01:05:29.988956 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-01-03 01:05:29.988962 | orchestrator | Saturday 03 January 2026 01:04:00 +0000 (0:00:00.806) 0:00:02.337 ****** 2026-01-03 01:05:29.988966 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-03 01:05:29.988971 | orchestrator | 2026-01-03 01:05:29.988976 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-01-03 01:05:29.988981 | orchestrator | Saturday 03 January 2026 01:04:01 +0000 (0:00:00.835) 0:00:03.172 ****** 2026-01-03 01:05:29.988985 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 01:05:29.988990 | orchestrator | 2026-01-03 01:05:29.988995 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-01-03 01:05:29.989009 | orchestrator | Saturday 03 January 2026 01:04:01 +0000 (0:00:00.646) 0:00:03.818 ****** 2026-01-03 01:05:29.989015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 01:05:29.989021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 01:05:29.989027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 01:05:29.989037 | orchestrator |
2026-01-03 01:05:29.989042 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2026-01-03 01:05:29.989048 | orchestrator | Saturday 03 January 2026 01:04:03 +0000 (0:00:01.457) 0:00:05.276 ******
2026-01-03 01:05:29.989051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 01:05:29.989055 | orchestrator | skipping: [testbed-node-0]
2026-01-03 01:05:29.989058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 01:05:29.989061 | orchestrator | skipping: [testbed-node-1]
2026-01-03 01:05:29.989068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 01:05:29.989071 | orchestrator | skipping: [testbed-node-2]
2026-01-03 01:05:29.989074 | orchestrator |
2026-01-03 01:05:29.989077 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2026-01-03 01:05:29.989082 | orchestrator | Saturday 03 January 2026 01:04:03 +0000 (0:00:00.441) 0:00:05.718 ******
2026-01-03 01:05:29.989088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 01:05:29.989100 | orchestrator | skipping: [testbed-node-0]
2026-01-03 01:05:29.989107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 01:05:29.989112 | orchestrator | skipping: [testbed-node-1]
2026-01-03 01:05:29.989117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 01:05:29.989122 | orchestrator | skipping: [testbed-node-2]
2026-01-03 01:05:29.989128 | orchestrator |
2026-01-03 01:05:29.989133 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2026-01-03 01:05:29.989138 | orchestrator | Saturday 03 January 2026 01:04:04 +0000 (0:00:00.839) 0:00:06.558 ******
2026-01-03 01:05:29.989145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 01:05:29.989149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 01:05:29.989152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 01:05:29.989162 | orchestrator |
2026-01-03 01:05:29.989169 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2026-01-03 01:05:29.989174 | orchestrator | Saturday 03 January 2026 01:04:05 +0000 (0:00:01.474) 0:00:08.032 ******
2026-01-03 01:05:29.989182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 01:05:29.989187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 01:05:29.989193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 01:05:29.989198 | orchestrator |
2026-01-03 01:05:29.989204 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-01-03 01:05:29.989213 | orchestrator | Saturday 03 January 2026 01:04:07 +0000 (0:00:01.373) 0:00:09.405 ******
2026-01-03 01:05:29.989218 | orchestrator | skipping: [testbed-node-0]
2026-01-03 01:05:29.989223 | orchestrator | skipping: [testbed-node-1]
2026-01-03 01:05:29.989229 | orchestrator | skipping: [testbed-node-2]
2026-01-03 01:05:29.989234 | orchestrator |
2026-01-03 01:05:29.989239 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-01-03 01:05:29.989244 | orchestrator | Saturday 03 January 2026 01:04:07 +0000 (0:00:00.426) 0:00:09.832 ******
2026-01-03 01:05:29.989249 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-01-03 01:05:29.989253 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-01-03 01:05:29.989260 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-01-03 01:05:29.989263 | orchestrator |
2026-01-03 01:05:29.989266 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-01-03 01:05:29.989269 | orchestrator | Saturday 03 January 2026 01:04:08 +0000 (0:00:01.218) 0:00:11.050 ******
2026-01-03 01:05:29.989273 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-01-03 01:05:29.989276 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-01-03 01:05:29.989279 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-01-03 01:05:29.989282 | orchestrator |
2026-01-03 01:05:29.989285 | orchestrator | TASK [grafana : Check if the folder for custom grafana dashboards exists] ******
2026-01-03 01:05:29.989288 | orchestrator | Saturday 03 January 2026 01:04:10 +0000 (0:00:01.472) 0:00:12.522 ******
2026-01-03 01:05:29.989291 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-03 01:05:29.989294 | orchestrator |
2026-01-03 01:05:29.989297 | orchestrator | TASK [grafana : Remove templated Grafana dashboards] ***************************
2026-01-03 01:05:29.989300 | orchestrator | Saturday 03 January 2026 01:04:11 +0000 (0:00:00.704) 0:00:13.227 ******
2026-01-03 01:05:29.989305 | orchestrator | ok: [testbed-node-0]
2026-01-03 01:05:29.989310 | orchestrator | ok: [testbed-node-1]
2026-01-03 01:05:29.989319 | orchestrator | ok: [testbed-node-2]
2026-01-03 01:05:29.989324 | orchestrator |
2026-01-03 01:05:29.989329 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-01-03 01:05:29.989334 | orchestrator | Saturday 03 January 2026 01:04:11 +0000 (0:00:00.833) 0:00:14.060 ******
2026-01-03 01:05:29.989339 | orchestrator | changed: [testbed-node-0]
2026-01-03 01:05:29.989344 | orchestrator | changed: [testbed-node-1]
2026-01-03 01:05:29.989349 | orchestrator | changed: [testbed-node-2]
2026-01-03 01:05:29.989354 | orchestrator |
2026-01-03 01:05:29.989359 | orchestrator | TASK [service-check-containers : grafana | Check containers] *******************
2026-01-03 01:05:29.989365 | orchestrator | Saturday 03 January 2026 01:04:13 +0000 (0:00:01.536) 0:00:15.596 ******
2026-01-03 01:05:29.989374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 01:05:29.989380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 01:05:29.989390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 01:05:29.989401 | orchestrator |
2026-01-03 01:05:29.989406 | orchestrator | TASK [service-check-containers : grafana | Notify handlers to restart containers] ***
2026-01-03 01:05:29.989412 | orchestrator | Saturday 03 January 2026 01:04:14 +0000 (0:00:01.030) 0:00:16.627 ******
2026-01-03 01:05:29.989417 | orchestrator | changed: [testbed-node-0] => {
2026-01-03 01:05:29.989422 | orchestrator |     "msg": "Notifying handlers"
2026-01-03 01:05:29.989425 | orchestrator | }
2026-01-03 01:05:29.989429 | orchestrator | changed: [testbed-node-1] => {
2026-01-03 01:05:29.989433 | orchestrator |     "msg": "Notifying handlers"
2026-01-03 01:05:29.989436 | orchestrator | }
2026-01-03 01:05:29.989440 | orchestrator | changed: [testbed-node-2] => {
2026-01-03 01:05:29.989443 | orchestrator |     "msg": "Notifying handlers"
2026-01-03 01:05:29.989447 | orchestrator | }
2026-01-03 01:05:29.989451 | orchestrator |
2026-01-03 01:05:29.989455 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-03 01:05:29.989458 | orchestrator | Saturday 03 January 2026 01:04:14 +0000 (0:00:00.331) 0:00:16.958 ******
2026-01-03 01:05:29.989462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 01:05:29.989466 | orchestrator | skipping: [testbed-node-0]
2026-01-03 01:05:29.989470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 01:05:29.989475 | orchestrator | skipping: [testbed-node-1]
2026-01-03 01:05:29.989479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-03 01:05:29.989483 | orchestrator | skipping: [testbed-node-2]
2026-01-03 01:05:29.989489 | orchestrator |
2026-01-03 01:05:29.989493 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-01-03 01:05:29.989496 | orchestrator | Saturday 03 January 2026 01:04:15 +0000 (0:00:00.769) 0:00:17.728 ******
2026-01-03 01:05:29.989500 | orchestrator | changed: [testbed-node-0]
2026-01-03 01:05:29.989504 | orchestrator |
2026-01-03 01:05:29.989508 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2026-01-03 01:05:29.989512 | orchestrator | Saturday 03 January 2026 01:04:18 +0000 (0:00:02.411) 0:00:20.139 ******
2026-01-03 01:05:29.989516 | orchestrator | changed: [testbed-node-0]
2026-01-03 01:05:29.989519 | orchestrator |
2026-01-03 01:05:29.989523 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-01-03 01:05:29.989527 | orchestrator | Saturday 03 January 2026 01:04:20 +0000 (0:00:02.412) 0:00:22.551 ******
2026-01-03 01:05:29.989530 | orchestrator |
2026-01-03 01:05:29.989534 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-01-03 01:05:29.989538 | orchestrator | Saturday 03 January 2026 01:04:20 +0000 (0:00:00.066) 0:00:22.618 ******
2026-01-03 01:05:29.989542 | orchestrator |
2026-01-03 01:05:29.989546 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-01-03 01:05:29.989551 | orchestrator | Saturday 03 January 2026 01:04:20 +0000 (0:00:00.071) 0:00:22.690 ******
2026-01-03 01:05:29.989555 | orchestrator |
2026-01-03 01:05:29.989559 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-01-03 01:05:29.989562 | orchestrator | Saturday 03 January 2026 01:04:20 +0000 (0:00:00.064) 0:00:22.754 ******
2026-01-03 01:05:29.989566 | orchestrator | skipping: [testbed-node-1]
2026-01-03 01:05:29.989569 | orchestrator | skipping: [testbed-node-2]
2026-01-03 01:05:29.989573 | orchestrator | changed: [testbed-node-0]
2026-01-03 01:05:29.989576 | orchestrator |
2026-01-03 01:05:29.989580 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-01-03 01:05:29.989584 | orchestrator | Saturday 03 January 2026 01:04:22 +0000 (0:00:01.701) 0:00:24.455 ******
2026-01-03 01:05:29.989587 | orchestrator | skipping: [testbed-node-1]
2026-01-03 01:05:29.989591 | orchestrator | skipping: [testbed-node-2]
2026-01-03 01:05:29.989595 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-01-03 01:05:29.989599 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-01-03 01:05:29.989602 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2026-01-03 01:05:29.989606 | orchestrator | ok: [testbed-node-0]
2026-01-03 01:05:29.989610 | orchestrator |
2026-01-03 01:05:29.989613 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-01-03 01:05:29.989617 | orchestrator | Saturday 03 January 2026 01:05:00 +0000 (0:00:38.423) 0:01:02.879 ******
2026-01-03 01:05:29.989620 | orchestrator | skipping: [testbed-node-0]
2026-01-03 01:05:29.989624 | orchestrator | changed: [testbed-node-1]
2026-01-03 01:05:29.989628 | orchestrator | changed: [testbed-node-2]
2026-01-03 01:05:29.989631 | orchestrator |
2026-01-03 01:05:29.989635 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-01-03 01:05:29.989639 | orchestrator | Saturday 03 January 2026 01:05:24 +0000 (0:00:23.616) 0:01:26.496 ******
2026-01-03 01:05:29.989643 | orchestrator | ok: [testbed-node-0]
2026-01-03 01:05:29.989646 | orchestrator |
2026-01-03 01:05:29.989650 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-01-03 01:05:29.989654 | orchestrator | Saturday 03 January 2026 01:05:26 +0000 (0:00:01.793) 0:01:28.289 ******
2026-01-03 01:05:29.989658 | orchestrator | skipping: [testbed-node-0]
2026-01-03 01:05:29.989661 | orchestrator | skipping: [testbed-node-1]
2026-01-03 01:05:29.989665 | orchestrator | skipping: [testbed-node-2]
2026-01-03 01:05:29.989669 | orchestrator |
2026-01-03 01:05:29.989672 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-01-03 01:05:29.989678 | orchestrator | Saturday 03 January 2026 01:05:26 +0000 (0:00:00.274) 0:01:28.564 ******
2026-01-03 01:05:29.989683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-01-03 01:05:29.989692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-01-03 01:05:29.989697 | orchestrator |
2026-01-03 01:05:29.989701 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-01-03 01:05:29.989704 | orchestrator | Saturday 03 January 2026 01:05:28 +0000 (0:00:01.948) 0:01:30.512 ******
2026-01-03 01:05:29.989708 | orchestrator | skipping: [testbed-node-0]
2026-01-03 01:05:29.989712 | orchestrator |
2026-01-03 01:05:29.989717 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 01:05:29.989725 | orchestrator | testbed-node-0 : ok=22  changed=13  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-03 01:05:29.989732 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-03 01:05:29.989737 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-03 01:05:29.989742 | orchestrator |
2026-01-03 01:05:29.989748 | orchestrator |
2026-01-03 01:05:29.989753 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 01:05:29.989759 | orchestrator | Saturday 03 January 2026 01:05:28 +0000 (0:00:00.251) 0:01:30.763 ******
2026-01-03 01:05:29.989764 | orchestrator | ===============================================================================
2026-01-03 01:05:29.989769 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.42s
2026-01-03 01:05:29.989774 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 23.62s
2026-01-03 01:05:29.989778 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.41s
2026-01-03 01:05:29.989781 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.41s
2026-01-03 01:05:29.989784 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 1.95s
2026-01-03 01:05:29.989787 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 1.79s
2026-01-03 01:05:29.989790 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.70s
2026-01-03 01:05:29.989795 | orchestrator | grafana : Copying over custom dashboards -------------------------------- 1.54s
2026-01-03 01:05:29.989799 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.47s
2026-01-03 01:05:29.989802 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.47s
2026-01-03 01:05:29.989805 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.46s
2026-01-03 01:05:29.989808 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.37s
2026-01-03 01:05:29.989811 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.22s
2026-01-03 01:05:29.989814 | orchestrator | service-check-containers : grafana | Check containers ------------------- 1.03s
2026-01-03 01:05:29.989817 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.84s
2026-01-03 01:05:29.989820 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.84s
2026-01-03 01:05:29.989823 | orchestrator | grafana : Remove templated Grafana dashboards --------------------------- 0.83s
2026-01-03 01:05:29.989829 | orchestrator | grafana : Ensuring config directories exist
----------------------------- 0.81s
2026-01-03 01:05:29.989832 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.77s
2026-01-03 01:05:29.989835 | orchestrator | grafana : Check if the folder for custom grafana dashboards exists ------ 0.70s
2026-01-03 01:05:29.989839 | orchestrator | 2026-01-03 01:05:29 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:05:33.034555 | orchestrator | 2026-01-03 01:05:33 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 01:05:33.038177 | orchestrator | 2026-01-03 01:05:33 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 01:05:33.038255 | orchestrator | 2026-01-03 01:05:33 | INFO  | Wait 1 second(s) until the next check
[identical polling entries from 01:05:36 through 01:06:43 elided; both tasks remained in state STARTED]
2026-01-03 01:06:46.159012 | orchestrator | 2026-01-03 01:06:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 01:06:46.160862 | orchestrator | 2026-01-03 01:06:46 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 01:06:46.160930 | orchestrator | 2026-01-03 01:06:46 | INFO  | Wait 1 second(s)
until the next check 2026-01-03 01:06:49.205953 | orchestrator | 2026-01-03 01:06:49 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:06:49.208723 | orchestrator | 2026-01-03 01:06:49 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:06:49.208788 | orchestrator | 2026-01-03 01:06:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:06:52.259928 | orchestrator | 2026-01-03 01:06:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:06:52.261267 | orchestrator | 2026-01-03 01:06:52 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:06:52.261339 | orchestrator | 2026-01-03 01:06:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:06:55.307545 | orchestrator | 2026-01-03 01:06:55 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:06:55.309426 | orchestrator | 2026-01-03 01:06:55 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:06:55.309492 | orchestrator | 2026-01-03 01:06:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:06:58.350194 | orchestrator | 2026-01-03 01:06:58 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:06:58.350866 | orchestrator | 2026-01-03 01:06:58 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:06:58.350961 | orchestrator | 2026-01-03 01:06:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:07:01.391930 | orchestrator | 2026-01-03 01:07:01 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:07:01.394315 | orchestrator | 2026-01-03 01:07:01 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:07:01.394425 | orchestrator | 2026-01-03 01:07:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:07:04.439006 | orchestrator | 2026-01-03 
01:07:04 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:07:04.441541 | orchestrator | 2026-01-03 01:07:04 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:07:04.441626 | orchestrator | 2026-01-03 01:07:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:07:07.485089 | orchestrator | 2026-01-03 01:07:07 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:07:07.487682 | orchestrator | 2026-01-03 01:07:07 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:07:07.487812 | orchestrator | 2026-01-03 01:07:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:07:10.529511 | orchestrator | 2026-01-03 01:07:10 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:07:10.532038 | orchestrator | 2026-01-03 01:07:10 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:07:10.532093 | orchestrator | 2026-01-03 01:07:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:07:13.578366 | orchestrator | 2026-01-03 01:07:13 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:07:13.580738 | orchestrator | 2026-01-03 01:07:13 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:07:13.580847 | orchestrator | 2026-01-03 01:07:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:07:16.628069 | orchestrator | 2026-01-03 01:07:16 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:07:16.629688 | orchestrator | 2026-01-03 01:07:16 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:07:16.629753 | orchestrator | 2026-01-03 01:07:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:07:19.678953 | orchestrator | 2026-01-03 01:07:19 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 01:07:19.681195 | orchestrator | 2026-01-03 01:07:19 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:07:19.681242 | orchestrator | 2026-01-03 01:07:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:07:22.737625 | orchestrator | 2026-01-03 01:07:22 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:07:22.739458 | orchestrator | 2026-01-03 01:07:22 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:07:22.739518 | orchestrator | 2026-01-03 01:07:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:07:25.784146 | orchestrator | 2026-01-03 01:07:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:07:25.787514 | orchestrator | 2026-01-03 01:07:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:07:25.787603 | orchestrator | 2026-01-03 01:07:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:07:28.835442 | orchestrator | 2026-01-03 01:07:28 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:07:28.839640 | orchestrator | 2026-01-03 01:07:28 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:07:28.839735 | orchestrator | 2026-01-03 01:07:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:07:31.876834 | orchestrator | 2026-01-03 01:07:31 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:07:31.878590 | orchestrator | 2026-01-03 01:07:31 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:07:31.878671 | orchestrator | 2026-01-03 01:07:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:07:34.922830 | orchestrator | 2026-01-03 01:07:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:07:34.925211 | orchestrator | 2026-01-03 01:07:34 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:07:34.925298 | orchestrator | 2026-01-03 01:07:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:07:37.969717 | orchestrator | 2026-01-03 01:07:37 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:07:37.972176 | orchestrator | 2026-01-03 01:07:37 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:07:37.972234 | orchestrator | 2026-01-03 01:07:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:07:41.017087 | orchestrator | 2026-01-03 01:07:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:07:41.019062 | orchestrator | 2026-01-03 01:07:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:07:41.019135 | orchestrator | 2026-01-03 01:07:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:07:44.056290 | orchestrator | 2026-01-03 01:07:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:07:44.058635 | orchestrator | 2026-01-03 01:07:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:07:44.058759 | orchestrator | 2026-01-03 01:07:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:07:47.103777 | orchestrator | 2026-01-03 01:07:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:07:47.106251 | orchestrator | 2026-01-03 01:07:47 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:07:47.106297 | orchestrator | 2026-01-03 01:07:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:07:50.150881 | orchestrator | 2026-01-03 01:07:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:07:50.152490 | orchestrator | 2026-01-03 01:07:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
01:07:50.152550 | orchestrator | 2026-01-03 01:07:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:07:53.200702 | orchestrator | 2026-01-03 01:07:53 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:07:53.202143 | orchestrator | 2026-01-03 01:07:53 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:07:53.202195 | orchestrator | 2026-01-03 01:07:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:07:56.247484 | orchestrator | 2026-01-03 01:07:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:07:56.250263 | orchestrator | 2026-01-03 01:07:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:07:56.250340 | orchestrator | 2026-01-03 01:07:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:07:59.290009 | orchestrator | 2026-01-03 01:07:59 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:07:59.291302 | orchestrator | 2026-01-03 01:07:59 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:07:59.291438 | orchestrator | 2026-01-03 01:07:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:08:02.332701 | orchestrator | 2026-01-03 01:08:02 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:08:02.334890 | orchestrator | 2026-01-03 01:08:02 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:08:02.335036 | orchestrator | 2026-01-03 01:08:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:08:05.377623 | orchestrator | 2026-01-03 01:08:05 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:08:05.379657 | orchestrator | 2026-01-03 01:08:05 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:08:05.379776 | orchestrator | 2026-01-03 01:08:05 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 01:08:08.426451 | orchestrator | 2026-01-03 01:08:08 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:08:08.428240 | orchestrator | 2026-01-03 01:08:08 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:08:08.428348 | orchestrator | 2026-01-03 01:08:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:08:11.472600 | orchestrator | 2026-01-03 01:08:11 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:08:11.474629 | orchestrator | 2026-01-03 01:08:11 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:08:11.474671 | orchestrator | 2026-01-03 01:08:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:08:14.521061 | orchestrator | 2026-01-03 01:08:14 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:08:14.522830 | orchestrator | 2026-01-03 01:08:14 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:08:14.522871 | orchestrator | 2026-01-03 01:08:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:08:17.564459 | orchestrator | 2026-01-03 01:08:17 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:08:17.566736 | orchestrator | 2026-01-03 01:08:17 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:08:17.567043 | orchestrator | 2026-01-03 01:08:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:08:20.613419 | orchestrator | 2026-01-03 01:08:20 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:08:20.615677 | orchestrator | 2026-01-03 01:08:20 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:08:20.615722 | orchestrator | 2026-01-03 01:08:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:08:23.655730 | orchestrator | 2026-01-03 
01:08:23 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:08:23.657777 | orchestrator | 2026-01-03 01:08:23 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:08:23.657818 | orchestrator | 2026-01-03 01:08:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:08:26.702172 | orchestrator | 2026-01-03 01:08:26 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:08:26.704247 | orchestrator | 2026-01-03 01:08:26 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:08:26.704469 | orchestrator | 2026-01-03 01:08:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:08:29.744315 | orchestrator | 2026-01-03 01:08:29 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:08:29.746723 | orchestrator | 2026-01-03 01:08:29 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:08:29.746945 | orchestrator | 2026-01-03 01:08:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:08:32.788065 | orchestrator | 2026-01-03 01:08:32 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:08:32.790125 | orchestrator | 2026-01-03 01:08:32 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:08:32.790232 | orchestrator | 2026-01-03 01:08:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:08:35.833740 | orchestrator | 2026-01-03 01:08:35 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:08:35.836227 | orchestrator | 2026-01-03 01:08:35 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:08:35.836301 | orchestrator | 2026-01-03 01:08:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:08:38.879685 | orchestrator | 2026-01-03 01:08:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 01:08:38.881471 | orchestrator | 2026-01-03 01:08:38 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:08:38.881685 | orchestrator | 2026-01-03 01:08:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:08:41.927283 | orchestrator | 2026-01-03 01:08:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:08:41.929817 | orchestrator | 2026-01-03 01:08:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:08:41.929863 | orchestrator | 2026-01-03 01:08:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:08:44.971230 | orchestrator | 2026-01-03 01:08:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:08:44.973263 | orchestrator | 2026-01-03 01:08:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:08:44.973350 | orchestrator | 2026-01-03 01:08:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:08:48.015777 | orchestrator | 2026-01-03 01:08:48 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:08:48.017161 | orchestrator | 2026-01-03 01:08:48 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:08:48.017242 | orchestrator | 2026-01-03 01:08:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:08:51.061416 | orchestrator | 2026-01-03 01:08:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:08:51.062654 | orchestrator | 2026-01-03 01:08:51 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:08:51.062689 | orchestrator | 2026-01-03 01:08:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:08:54.103428 | orchestrator | 2026-01-03 01:08:54 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:08:54.104858 | orchestrator | 2026-01-03 01:08:54 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:08:54.104994 | orchestrator | 2026-01-03 01:08:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:08:57.153045 | orchestrator | 2026-01-03 01:08:57 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:08:57.154953 | orchestrator | 2026-01-03 01:08:57 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:08:57.155026 | orchestrator | 2026-01-03 01:08:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:00.196822 | orchestrator | 2026-01-03 01:09:00 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:09:00.199352 | orchestrator | 2026-01-03 01:09:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:09:00.199400 | orchestrator | 2026-01-03 01:09:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:03.242315 | orchestrator | 2026-01-03 01:09:03 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:09:03.243832 | orchestrator | 2026-01-03 01:09:03 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:09:03.243901 | orchestrator | 2026-01-03 01:09:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:06.288164 | orchestrator | 2026-01-03 01:09:06 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:09:06.289754 | orchestrator | 2026-01-03 01:09:06 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:09:06.289790 | orchestrator | 2026-01-03 01:09:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:09.336200 | orchestrator | 2026-01-03 01:09:09 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:09:09.338501 | orchestrator | 2026-01-03 01:09:09 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
01:09:09.338571 | orchestrator | 2026-01-03 01:09:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:12.381502 | orchestrator | 2026-01-03 01:09:12 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:09:12.383188 | orchestrator | 2026-01-03 01:09:12 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:09:12.383246 | orchestrator | 2026-01-03 01:09:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:15.430589 | orchestrator | 2026-01-03 01:09:15 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:09:15.432962 | orchestrator | 2026-01-03 01:09:15 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:09:15.433018 | orchestrator | 2026-01-03 01:09:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:18.477570 | orchestrator | 2026-01-03 01:09:18 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:09:18.480540 | orchestrator | 2026-01-03 01:09:18 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:09:18.480624 | orchestrator | 2026-01-03 01:09:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:21.522483 | orchestrator | 2026-01-03 01:09:21 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:09:21.523963 | orchestrator | 2026-01-03 01:09:21 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:09:21.524052 | orchestrator | 2026-01-03 01:09:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:24.569047 | orchestrator | 2026-01-03 01:09:24 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:09:24.571730 | orchestrator | 2026-01-03 01:09:24 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:09:24.571835 | orchestrator | 2026-01-03 01:09:24 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 01:09:27.616492 | orchestrator | 2026-01-03 01:09:27 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:09:27.617830 | orchestrator | 2026-01-03 01:09:27 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:09:27.617888 | orchestrator | 2026-01-03 01:09:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:30.661215 | orchestrator | 2026-01-03 01:09:30 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:09:30.662406 | orchestrator | 2026-01-03 01:09:30 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:09:30.662447 | orchestrator | 2026-01-03 01:09:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:33.706065 | orchestrator | 2026-01-03 01:09:33 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:09:33.707557 | orchestrator | 2026-01-03 01:09:33 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:09:33.707609 | orchestrator | 2026-01-03 01:09:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:36.755627 | orchestrator | 2026-01-03 01:09:36 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:09:36.758076 | orchestrator | 2026-01-03 01:09:36 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:09:36.758159 | orchestrator | 2026-01-03 01:09:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:39.806871 | orchestrator | 2026-01-03 01:09:39 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:09:39.808503 | orchestrator | 2026-01-03 01:09:39 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:09:39.808617 | orchestrator | 2026-01-03 01:09:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:42.854930 | orchestrator | 2026-01-03 
01:09:42 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:09:42.856227 | orchestrator | 2026-01-03 01:09:42 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:09:42.856574 | orchestrator | 2026-01-03 01:09:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:45.901577 | orchestrator | 2026-01-03 01:09:45 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:09:45.904097 | orchestrator | 2026-01-03 01:09:45 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:09:45.904132 | orchestrator | 2026-01-03 01:09:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:48.945652 | orchestrator | 2026-01-03 01:09:48 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:09:48.948124 | orchestrator | 2026-01-03 01:09:48 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:09:48.948180 | orchestrator | 2026-01-03 01:09:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:51.988291 | orchestrator | 2026-01-03 01:09:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:09:51.991046 | orchestrator | 2026-01-03 01:09:51 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:09:51.991152 | orchestrator | 2026-01-03 01:09:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:55.044305 | orchestrator | 2026-01-03 01:09:55 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:09:55.044481 | orchestrator | 2026-01-03 01:09:55 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:09:55.044956 | orchestrator | 2026-01-03 01:09:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:58.086099 | orchestrator | 2026-01-03 01:09:58 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 01:09:58.088391 | orchestrator | 2026-01-03 01:09:58 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:09:58.088516 | orchestrator | 2026-01-03 01:09:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:10:01.126346 | orchestrator | 2026-01-03 01:10:01 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:10:01.128869 | orchestrator | 2026-01-03 01:10:01 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:10:01.128948 | orchestrator | 2026-01-03 01:10:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:10:04.175269 | orchestrator | 2026-01-03 01:10:04 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:10:04.177723 | orchestrator | 2026-01-03 01:10:04 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:10:04.177825 | orchestrator | 2026-01-03 01:10:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:10:07.223216 | orchestrator | 2026-01-03 01:10:07 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:10:07.224344 | orchestrator | 2026-01-03 01:10:07 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:10:07.224433 | orchestrator | 2026-01-03 01:10:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:10:10.269829 | orchestrator | 2026-01-03 01:10:10 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:10:10.271434 | orchestrator | 2026-01-03 01:10:10 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:10:10.271480 | orchestrator | 2026-01-03 01:10:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:10:13.316937 | orchestrator | 2026-01-03 01:10:13 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:10:13.318154 | orchestrator | 2026-01-03 01:10:13 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:10:13.318215 | orchestrator | 2026-01-03 01:10:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:10:16.373322 | orchestrator | 2026-01-03 01:10:16 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:10:16.375315 | orchestrator | 2026-01-03 01:10:16 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:10:16.375413 | orchestrator | 2026-01-03 01:10:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:10:19.418169 | orchestrator | 2026-01-03 01:10:19 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:10:19.420835 | orchestrator | 2026-01-03 01:10:19 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:10:19.420979 | orchestrator | 2026-01-03 01:10:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:10:22.467630 | orchestrator | 2026-01-03 01:10:22 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:10:22.470116 | orchestrator | 2026-01-03 01:10:22 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:10:22.470191 | orchestrator | 2026-01-03 01:10:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:10:25.517536 | orchestrator | 2026-01-03 01:10:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:10:25.518655 | orchestrator | 2026-01-03 01:10:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:10:25.518743 | orchestrator | 2026-01-03 01:10:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:10:28.561460 | orchestrator | 2026-01-03 01:10:28 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:10:28.563755 | orchestrator | 2026-01-03 01:10:28 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
01:10:28.563805 | orchestrator | 2026-01-03 01:10:28 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:10:31.606612 | orchestrator | 2026-01-03 01:10:31 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 01:10:31.609746 | orchestrator | 2026-01-03 01:10:31 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 01:10:31.609852 | orchestrator | 2026-01-03 01:10:31 | INFO  | Wait 1 second(s) until the next check
[... identical poll cycle repeated every ~3 seconds from 01:10:34 to 01:15:24; tasks b16f335a-ddd7-42d6-ae3e-bcacbe9793fa and 80634f4e-7557-474f-b43a-6fc42f9dfcdb remain in state STARTED throughout ...]
2026-01-03 01:15:27.258545 | orchestrator | 2026-01-03 01:15:27 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 01:15:27.259741 | orchestrator | 2026-01-03 01:15:27 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 01:15:27.259777 | orchestrator | 2026-01-03 01:15:27 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:15:30.304448 | orchestrator | 2026-01-03 01:15:30 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 01:15:30.306119 | orchestrator | 2026-01-03 01:15:30 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:15:30.306286 | orchestrator | 2026-01-03 01:15:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:15:33.353223 | orchestrator | 2026-01-03 01:15:33 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:15:33.355315 | orchestrator | 2026-01-03 01:15:33 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:15:33.355373 | orchestrator | 2026-01-03 01:15:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:15:36.400945 | orchestrator | 2026-01-03 01:15:36 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:15:36.402291 | orchestrator | 2026-01-03 01:15:36 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:15:36.402365 | orchestrator | 2026-01-03 01:15:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:15:39.450308 | orchestrator | 2026-01-03 01:15:39 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:15:39.452037 | orchestrator | 2026-01-03 01:15:39 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:15:39.452085 | orchestrator | 2026-01-03 01:15:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:15:42.498291 | orchestrator | 2026-01-03 01:15:42 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:15:42.500771 | orchestrator | 2026-01-03 01:15:42 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:15:42.500919 | orchestrator | 2026-01-03 01:15:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:15:45.543664 | orchestrator | 2026-01-03 01:15:45 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:15:45.544817 | orchestrator | 2026-01-03 01:15:45 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
01:15:45.544939 | orchestrator | 2026-01-03 01:15:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:15:48.591628 | orchestrator | 2026-01-03 01:15:48 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:15:48.594723 | orchestrator | 2026-01-03 01:15:48 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:15:48.595192 | orchestrator | 2026-01-03 01:15:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:15:51.644647 | orchestrator | 2026-01-03 01:15:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:15:51.646002 | orchestrator | 2026-01-03 01:15:51 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:15:51.646151 | orchestrator | 2026-01-03 01:15:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:15:54.686935 | orchestrator | 2026-01-03 01:15:54 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:15:54.689049 | orchestrator | 2026-01-03 01:15:54 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:15:54.689146 | orchestrator | 2026-01-03 01:15:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:15:57.737355 | orchestrator | 2026-01-03 01:15:57 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:15:57.740585 | orchestrator | 2026-01-03 01:15:57 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:15:57.740637 | orchestrator | 2026-01-03 01:15:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:16:00.782787 | orchestrator | 2026-01-03 01:16:00 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:16:00.784998 | orchestrator | 2026-01-03 01:16:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:16:00.785049 | orchestrator | 2026-01-03 01:16:00 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 01:16:03.830956 | orchestrator | 2026-01-03 01:16:03 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:16:03.833967 | orchestrator | 2026-01-03 01:16:03 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:16:03.834055 | orchestrator | 2026-01-03 01:16:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:16:06.880974 | orchestrator | 2026-01-03 01:16:06 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:16:06.883286 | orchestrator | 2026-01-03 01:16:06 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:16:06.883369 | orchestrator | 2026-01-03 01:16:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:16:09.928143 | orchestrator | 2026-01-03 01:16:09 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:16:09.929943 | orchestrator | 2026-01-03 01:16:09 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:16:09.929980 | orchestrator | 2026-01-03 01:16:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:16:12.973756 | orchestrator | 2026-01-03 01:16:12 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:16:12.974714 | orchestrator | 2026-01-03 01:16:12 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:16:12.974755 | orchestrator | 2026-01-03 01:16:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:16:16.019639 | orchestrator | 2026-01-03 01:16:16 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:16:16.020794 | orchestrator | 2026-01-03 01:16:16 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:16:16.020852 | orchestrator | 2026-01-03 01:16:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:16:19.066539 | orchestrator | 2026-01-03 
01:16:19 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:16:19.067319 | orchestrator | 2026-01-03 01:16:19 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:16:19.067409 | orchestrator | 2026-01-03 01:16:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:16:22.115822 | orchestrator | 2026-01-03 01:16:22 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:16:22.119975 | orchestrator | 2026-01-03 01:16:22 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:16:22.120143 | orchestrator | 2026-01-03 01:16:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:16:25.157669 | orchestrator | 2026-01-03 01:16:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:16:25.159590 | orchestrator | 2026-01-03 01:16:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:16:25.159628 | orchestrator | 2026-01-03 01:16:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:16:28.206170 | orchestrator | 2026-01-03 01:16:28 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:16:28.208068 | orchestrator | 2026-01-03 01:16:28 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:16:28.208152 | orchestrator | 2026-01-03 01:16:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:16:31.253395 | orchestrator | 2026-01-03 01:16:31 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:16:31.255608 | orchestrator | 2026-01-03 01:16:31 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:16:31.255678 | orchestrator | 2026-01-03 01:16:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:16:34.295982 | orchestrator | 2026-01-03 01:16:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 01:16:34.298442 | orchestrator | 2026-01-03 01:16:34 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:16:34.298520 | orchestrator | 2026-01-03 01:16:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:16:37.340709 | orchestrator | 2026-01-03 01:16:37 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:16:37.342097 | orchestrator | 2026-01-03 01:16:37 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:16:37.342178 | orchestrator | 2026-01-03 01:16:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:16:40.380500 | orchestrator | 2026-01-03 01:16:40 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:16:40.381134 | orchestrator | 2026-01-03 01:16:40 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:16:40.381183 | orchestrator | 2026-01-03 01:16:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:16:43.424385 | orchestrator | 2026-01-03 01:16:43 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:16:43.425993 | orchestrator | 2026-01-03 01:16:43 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:16:43.426094 | orchestrator | 2026-01-03 01:16:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:16:46.471313 | orchestrator | 2026-01-03 01:16:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:16:46.473356 | orchestrator | 2026-01-03 01:16:46 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:16:46.473442 | orchestrator | 2026-01-03 01:16:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:16:49.522710 | orchestrator | 2026-01-03 01:16:49 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:16:49.524426 | orchestrator | 2026-01-03 01:16:49 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:16:49.524523 | orchestrator | 2026-01-03 01:16:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:16:52.573749 | orchestrator | 2026-01-03 01:16:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:16:52.575327 | orchestrator | 2026-01-03 01:16:52 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:16:52.575373 | orchestrator | 2026-01-03 01:16:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:16:55.620448 | orchestrator | 2026-01-03 01:16:55 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:16:55.622131 | orchestrator | 2026-01-03 01:16:55 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:16:55.622244 | orchestrator | 2026-01-03 01:16:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:16:58.666722 | orchestrator | 2026-01-03 01:16:58 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:16:58.669250 | orchestrator | 2026-01-03 01:16:58 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:16:58.669362 | orchestrator | 2026-01-03 01:16:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:17:01.710491 | orchestrator | 2026-01-03 01:17:01 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:17:01.712150 | orchestrator | 2026-01-03 01:17:01 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:17:01.712206 | orchestrator | 2026-01-03 01:17:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:17:04.754903 | orchestrator | 2026-01-03 01:17:04 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:17:04.756293 | orchestrator | 2026-01-03 01:17:04 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
01:17:04.756360 | orchestrator | 2026-01-03 01:17:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:17:07.810693 | orchestrator | 2026-01-03 01:17:07 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:17:07.812358 | orchestrator | 2026-01-03 01:17:07 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:17:07.812398 | orchestrator | 2026-01-03 01:17:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:17:10.851978 | orchestrator | 2026-01-03 01:17:10 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:17:10.853821 | orchestrator | 2026-01-03 01:17:10 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:17:10.853877 | orchestrator | 2026-01-03 01:17:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:17:13.901453 | orchestrator | 2026-01-03 01:17:13 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:17:13.904013 | orchestrator | 2026-01-03 01:17:13 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:17:13.904103 | orchestrator | 2026-01-03 01:17:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:17:16.949509 | orchestrator | 2026-01-03 01:17:16 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:17:16.950600 | orchestrator | 2026-01-03 01:17:16 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:17:16.950647 | orchestrator | 2026-01-03 01:17:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:17:19.995246 | orchestrator | 2026-01-03 01:17:19 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:17:19.995740 | orchestrator | 2026-01-03 01:17:19 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:17:19.995838 | orchestrator | 2026-01-03 01:17:19 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 01:17:23.041562 | orchestrator | 2026-01-03 01:17:23 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:17:23.043378 | orchestrator | 2026-01-03 01:17:23 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:17:23.043455 | orchestrator | 2026-01-03 01:17:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:17:26.082158 | orchestrator | 2026-01-03 01:17:26 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:17:26.083888 | orchestrator | 2026-01-03 01:17:26 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:17:26.083938 | orchestrator | 2026-01-03 01:17:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:17:29.131231 | orchestrator | 2026-01-03 01:17:29 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:17:29.133461 | orchestrator | 2026-01-03 01:17:29 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:17:29.133499 | orchestrator | 2026-01-03 01:17:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:17:32.175626 | orchestrator | 2026-01-03 01:17:32 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:17:32.177774 | orchestrator | 2026-01-03 01:17:32 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:17:32.177860 | orchestrator | 2026-01-03 01:17:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:17:35.222092 | orchestrator | 2026-01-03 01:17:35 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:17:35.223846 | orchestrator | 2026-01-03 01:17:35 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:17:35.223893 | orchestrator | 2026-01-03 01:17:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:17:38.273445 | orchestrator | 2026-01-03 
01:17:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:17:38.275634 | orchestrator | 2026-01-03 01:17:38 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:17:38.275683 | orchestrator | 2026-01-03 01:17:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:17:41.322539 | orchestrator | 2026-01-03 01:17:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:17:41.324267 | orchestrator | 2026-01-03 01:17:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:17:41.324316 | orchestrator | 2026-01-03 01:17:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:17:44.369138 | orchestrator | 2026-01-03 01:17:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:17:44.370990 | orchestrator | 2026-01-03 01:17:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:17:44.371052 | orchestrator | 2026-01-03 01:17:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:17:47.415550 | orchestrator | 2026-01-03 01:17:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:17:47.416764 | orchestrator | 2026-01-03 01:17:47 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:17:47.416859 | orchestrator | 2026-01-03 01:17:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:17:50.460719 | orchestrator | 2026-01-03 01:17:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:17:50.462216 | orchestrator | 2026-01-03 01:17:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:17:50.462369 | orchestrator | 2026-01-03 01:17:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:17:53.506273 | orchestrator | 2026-01-03 01:17:53 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 01:17:53.507877 | orchestrator | 2026-01-03 01:17:53 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:17:53.507938 | orchestrator | 2026-01-03 01:17:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:17:56.556573 | orchestrator | 2026-01-03 01:17:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:17:56.559503 | orchestrator | 2026-01-03 01:17:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:17:56.559567 | orchestrator | 2026-01-03 01:17:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:17:59.607150 | orchestrator | 2026-01-03 01:17:59 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:17:59.608533 | orchestrator | 2026-01-03 01:17:59 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:17:59.608596 | orchestrator | 2026-01-03 01:17:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:02.650835 | orchestrator | 2026-01-03 01:18:02 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:18:02.653115 | orchestrator | 2026-01-03 01:18:02 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:18:02.653173 | orchestrator | 2026-01-03 01:18:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:05.701583 | orchestrator | 2026-01-03 01:18:05 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:18:05.703713 | orchestrator | 2026-01-03 01:18:05 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:18:05.703923 | orchestrator | 2026-01-03 01:18:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:08.749408 | orchestrator | 2026-01-03 01:18:08 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:18:08.751653 | orchestrator | 2026-01-03 01:18:08 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:18:08.751815 | orchestrator | 2026-01-03 01:18:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:11.800495 | orchestrator | 2026-01-03 01:18:11 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:18:11.802192 | orchestrator | 2026-01-03 01:18:11 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:18:11.802245 | orchestrator | 2026-01-03 01:18:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:14.846282 | orchestrator | 2026-01-03 01:18:14 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:18:14.848398 | orchestrator | 2026-01-03 01:18:14 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:18:14.848500 | orchestrator | 2026-01-03 01:18:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:17.893830 | orchestrator | 2026-01-03 01:18:17 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:18:17.895557 | orchestrator | 2026-01-03 01:18:17 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:18:17.895596 | orchestrator | 2026-01-03 01:18:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:20.939085 | orchestrator | 2026-01-03 01:18:20 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:18:20.942172 | orchestrator | 2026-01-03 01:18:20 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:18:20.942252 | orchestrator | 2026-01-03 01:18:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:23.985454 | orchestrator | 2026-01-03 01:18:23 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:18:23.987366 | orchestrator | 2026-01-03 01:18:23 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
01:18:23.987423 | orchestrator | 2026-01-03 01:18:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:27.040643 | orchestrator | 2026-01-03 01:18:27 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:18:27.041616 | orchestrator | 2026-01-03 01:18:27 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:18:27.041889 | orchestrator | 2026-01-03 01:18:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:30.086489 | orchestrator | 2026-01-03 01:18:30 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:18:30.088042 | orchestrator | 2026-01-03 01:18:30 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:18:30.088101 | orchestrator | 2026-01-03 01:18:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:33.128500 | orchestrator | 2026-01-03 01:18:33 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:18:33.129791 | orchestrator | 2026-01-03 01:18:33 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:18:33.129855 | orchestrator | 2026-01-03 01:18:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:36.172740 | orchestrator | 2026-01-03 01:18:36 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:18:36.174124 | orchestrator | 2026-01-03 01:18:36 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:18:36.174435 | orchestrator | 2026-01-03 01:18:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:39.217123 | orchestrator | 2026-01-03 01:18:39 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:18:39.219604 | orchestrator | 2026-01-03 01:18:39 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:18:39.219673 | orchestrator | 2026-01-03 01:18:39 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 01:18:42.266110 | orchestrator | 2026-01-03 01:18:42 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:18:42.268526 | orchestrator | 2026-01-03 01:18:42 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:18:42.268680 | orchestrator | 2026-01-03 01:18:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:45.318658 | orchestrator | 2026-01-03 01:18:45 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:18:45.319384 | orchestrator | 2026-01-03 01:18:45 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:18:45.319436 | orchestrator | 2026-01-03 01:18:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:48.366973 | orchestrator | 2026-01-03 01:18:48 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:18:48.368571 | orchestrator | 2026-01-03 01:18:48 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:18:48.368819 | orchestrator | 2026-01-03 01:18:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:51.414913 | orchestrator | 2026-01-03 01:18:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:18:51.418962 | orchestrator | 2026-01-03 01:18:51 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:18:51.419472 | orchestrator | 2026-01-03 01:18:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:54.464577 | orchestrator | 2026-01-03 01:18:54 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:18:54.466700 | orchestrator | 2026-01-03 01:18:54 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:18:54.466863 | orchestrator | 2026-01-03 01:18:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:57.509643 | orchestrator | 2026-01-03 
01:18:57 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:18:57.511624 | orchestrator | 2026-01-03 01:18:57 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:18:57.511678 | orchestrator | 2026-01-03 01:18:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:19:00.560238 | orchestrator | 2026-01-03 01:19:00 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:19:00.566785 | orchestrator | 2026-01-03 01:19:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:19:00.567252 | orchestrator | 2026-01-03 01:19:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:19:03.609815 | orchestrator | 2026-01-03 01:19:03 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:19:03.611952 | orchestrator | 2026-01-03 01:19:03 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:19:03.612019 | orchestrator | 2026-01-03 01:19:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:19:06.657475 | orchestrator | 2026-01-03 01:19:06 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:19:06.659174 | orchestrator | 2026-01-03 01:19:06 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:19:06.659236 | orchestrator | 2026-01-03 01:19:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:19:09.705567 | orchestrator | 2026-01-03 01:19:09 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:19:09.707430 | orchestrator | 2026-01-03 01:19:09 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:19:09.707495 | orchestrator | 2026-01-03 01:19:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:19:12.749823 | orchestrator | 2026-01-03 01:19:12 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 01:19:12.751061 | orchestrator | 2026-01-03 01:19:12 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 01:19:12.751196 | orchestrator | 2026-01-03 01:19:12 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:19:15.798710 | orchestrator | 2026-01-03 01:19:15 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 01:19:15.799822 | orchestrator | 2026-01-03 01:19:15 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 01:19:15.799968 | orchestrator | 2026-01-03 01:19:15 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:24:41.959859 | orchestrator | 2026-01-03 01:24:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 01:24:41.964384 | orchestrator | 2026-01-03 01:24:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 01:24:41.964447 | orchestrator | 2026-01-03 01:24:41 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:24:45.011060 | orchestrator | 2026-01-03 01:24:45 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 01:24:45.014924 | orchestrator | 2026-01-03 01:24:45 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:24:45.014991 | orchestrator | 2026-01-03 01:24:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:24:48.059523 | orchestrator | 2026-01-03 01:24:48 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:24:48.060363 | orchestrator | 2026-01-03 01:24:48 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:24:48.060417 | orchestrator | 2026-01-03 01:24:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:24:51.111265 | orchestrator | 2026-01-03 01:24:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:24:51.113291 | orchestrator | 2026-01-03 01:24:51 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:24:51.113364 | orchestrator | 2026-01-03 01:24:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:24:54.157017 | orchestrator | 2026-01-03 01:24:54 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:24:54.158894 | orchestrator | 2026-01-03 01:24:54 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:24:54.159418 | orchestrator | 2026-01-03 01:24:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:24:57.209922 | orchestrator | 2026-01-03 01:24:57 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:24:57.211531 | orchestrator | 2026-01-03 01:24:57 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:24:57.211619 | orchestrator | 2026-01-03 01:24:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:25:00.256167 | orchestrator | 2026-01-03 01:25:00 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:25:00.256749 | orchestrator | 2026-01-03 01:25:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
01:25:00.256794 | orchestrator | 2026-01-03 01:25:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:25:03.301950 | orchestrator | 2026-01-03 01:25:03 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:25:03.304408 | orchestrator | 2026-01-03 01:25:03 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:25:03.304498 | orchestrator | 2026-01-03 01:25:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:25:06.350049 | orchestrator | 2026-01-03 01:25:06 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:25:06.350126 | orchestrator | 2026-01-03 01:25:06 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:25:06.350196 | orchestrator | 2026-01-03 01:25:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:25:09.399941 | orchestrator | 2026-01-03 01:25:09 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:25:09.401527 | orchestrator | 2026-01-03 01:25:09 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:25:09.401595 | orchestrator | 2026-01-03 01:25:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:25:12.448645 | orchestrator | 2026-01-03 01:25:12 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:25:12.450611 | orchestrator | 2026-01-03 01:25:12 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:25:12.450670 | orchestrator | 2026-01-03 01:25:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:25:15.495900 | orchestrator | 2026-01-03 01:25:15 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:25:15.497505 | orchestrator | 2026-01-03 01:25:15 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:25:15.497674 | orchestrator | 2026-01-03 01:25:15 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 01:25:18.541095 | orchestrator | 2026-01-03 01:25:18 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:25:18.541979 | orchestrator | 2026-01-03 01:25:18 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:25:18.542190 | orchestrator | 2026-01-03 01:25:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:25:21.587072 | orchestrator | 2026-01-03 01:25:21 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:25:21.589780 | orchestrator | 2026-01-03 01:25:21 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:25:21.590388 | orchestrator | 2026-01-03 01:25:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:25:24.633847 | orchestrator | 2026-01-03 01:25:24 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:25:24.635133 | orchestrator | 2026-01-03 01:25:24 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:25:24.635170 | orchestrator | 2026-01-03 01:25:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:25:27.676975 | orchestrator | 2026-01-03 01:25:27 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:25:27.679289 | orchestrator | 2026-01-03 01:25:27 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:25:27.679413 | orchestrator | 2026-01-03 01:25:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:25:30.722106 | orchestrator | 2026-01-03 01:25:30 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:25:30.724250 | orchestrator | 2026-01-03 01:25:30 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:25:30.724301 | orchestrator | 2026-01-03 01:25:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:25:33.766334 | orchestrator | 2026-01-03 
01:25:33 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:25:33.768918 | orchestrator | 2026-01-03 01:25:33 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:25:33.768995 | orchestrator | 2026-01-03 01:25:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:25:36.815520 | orchestrator | 2026-01-03 01:25:36 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:25:36.817253 | orchestrator | 2026-01-03 01:25:36 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:25:36.817513 | orchestrator | 2026-01-03 01:25:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:25:39.864304 | orchestrator | 2026-01-03 01:25:39 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:25:39.866086 | orchestrator | 2026-01-03 01:25:39 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:25:39.866139 | orchestrator | 2026-01-03 01:25:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:25:42.915115 | orchestrator | 2026-01-03 01:25:42 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:25:42.916828 | orchestrator | 2026-01-03 01:25:42 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:25:42.916895 | orchestrator | 2026-01-03 01:25:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:25:45.961200 | orchestrator | 2026-01-03 01:25:45 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:25:45.963908 | orchestrator | 2026-01-03 01:25:45 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:25:45.964101 | orchestrator | 2026-01-03 01:25:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:25:49.015500 | orchestrator | 2026-01-03 01:25:49 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 01:25:49.017402 | orchestrator | 2026-01-03 01:25:49 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:25:49.017480 | orchestrator | 2026-01-03 01:25:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:25:52.065720 | orchestrator | 2026-01-03 01:25:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:25:52.067927 | orchestrator | 2026-01-03 01:25:52 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:25:52.068001 | orchestrator | 2026-01-03 01:25:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:25:55.100311 | orchestrator | 2026-01-03 01:25:55 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:25:55.101177 | orchestrator | 2026-01-03 01:25:55 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:25:55.101229 | orchestrator | 2026-01-03 01:25:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:25:58.147486 | orchestrator | 2026-01-03 01:25:58 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:25:58.149370 | orchestrator | 2026-01-03 01:25:58 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:25:58.149547 | orchestrator | 2026-01-03 01:25:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:26:01.187903 | orchestrator | 2026-01-03 01:26:01 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:26:01.188586 | orchestrator | 2026-01-03 01:26:01 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:26:01.188638 | orchestrator | 2026-01-03 01:26:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:26:04.224795 | orchestrator | 2026-01-03 01:26:04 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:26:04.226448 | orchestrator | 2026-01-03 01:26:04 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:26:04.226530 | orchestrator | 2026-01-03 01:26:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:26:07.274056 | orchestrator | 2026-01-03 01:26:07 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:26:07.275788 | orchestrator | 2026-01-03 01:26:07 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:26:07.275867 | orchestrator | 2026-01-03 01:26:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:26:10.323568 | orchestrator | 2026-01-03 01:26:10 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:26:10.325247 | orchestrator | 2026-01-03 01:26:10 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:26:10.325310 | orchestrator | 2026-01-03 01:26:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:26:13.375045 | orchestrator | 2026-01-03 01:26:13 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:26:13.375122 | orchestrator | 2026-01-03 01:26:13 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:26:13.375131 | orchestrator | 2026-01-03 01:26:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:26:16.422788 | orchestrator | 2026-01-03 01:26:16 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:26:16.424593 | orchestrator | 2026-01-03 01:26:16 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:26:16.424778 | orchestrator | 2026-01-03 01:26:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:26:19.470116 | orchestrator | 2026-01-03 01:26:19 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:26:19.471892 | orchestrator | 2026-01-03 01:26:19 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
01:26:19.471980 | orchestrator | 2026-01-03 01:26:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:26:22.516858 | orchestrator | 2026-01-03 01:26:22 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:26:22.518951 | orchestrator | 2026-01-03 01:26:22 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:26:22.519056 | orchestrator | 2026-01-03 01:26:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:26:25.564776 | orchestrator | 2026-01-03 01:26:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:26:25.566338 | orchestrator | 2026-01-03 01:26:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:26:25.566431 | orchestrator | 2026-01-03 01:26:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:26:28.610605 | orchestrator | 2026-01-03 01:26:28 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:26:28.612571 | orchestrator | 2026-01-03 01:26:28 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:26:28.612624 | orchestrator | 2026-01-03 01:26:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:26:31.656461 | orchestrator | 2026-01-03 01:26:31 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:26:31.658784 | orchestrator | 2026-01-03 01:26:31 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:26:31.658840 | orchestrator | 2026-01-03 01:26:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:26:34.706429 | orchestrator | 2026-01-03 01:26:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:26:34.708193 | orchestrator | 2026-01-03 01:26:34 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:26:34.708254 | orchestrator | 2026-01-03 01:26:34 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 01:26:37.753878 | orchestrator | 2026-01-03 01:26:37 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:26:37.756754 | orchestrator | 2026-01-03 01:26:37 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:26:37.756818 | orchestrator | 2026-01-03 01:26:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:26:40.802994 | orchestrator | 2026-01-03 01:26:40 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:26:40.804595 | orchestrator | 2026-01-03 01:26:40 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:26:40.804653 | orchestrator | 2026-01-03 01:26:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:26:43.846884 | orchestrator | 2026-01-03 01:26:43 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:26:43.848601 | orchestrator | 2026-01-03 01:26:43 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:26:43.848701 | orchestrator | 2026-01-03 01:26:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:26:46.895051 | orchestrator | 2026-01-03 01:26:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:26:46.896905 | orchestrator | 2026-01-03 01:26:46 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:26:46.896961 | orchestrator | 2026-01-03 01:26:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:26:49.940657 | orchestrator | 2026-01-03 01:26:49 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:26:49.943068 | orchestrator | 2026-01-03 01:26:49 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:26:49.943299 | orchestrator | 2026-01-03 01:26:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:26:52.992491 | orchestrator | 2026-01-03 
01:26:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:26:52.994511 | orchestrator | 2026-01-03 01:26:52 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:26:52.994602 | orchestrator | 2026-01-03 01:26:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:26:56.042549 | orchestrator | 2026-01-03 01:26:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:26:56.044782 | orchestrator | 2026-01-03 01:26:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:26:56.044927 | orchestrator | 2026-01-03 01:26:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:26:59.086447 | orchestrator | 2026-01-03 01:26:59 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:26:59.087889 | orchestrator | 2026-01-03 01:26:59 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:26:59.087923 | orchestrator | 2026-01-03 01:26:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:02.126297 | orchestrator | 2026-01-03 01:27:02 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:27:02.128108 | orchestrator | 2026-01-03 01:27:02 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:27:02.128157 | orchestrator | 2026-01-03 01:27:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:05.167979 | orchestrator | 2026-01-03 01:27:05 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:27:05.171057 | orchestrator | 2026-01-03 01:27:05 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:27:05.171135 | orchestrator | 2026-01-03 01:27:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:08.216648 | orchestrator | 2026-01-03 01:27:08 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 01:27:08.217866 | orchestrator | 2026-01-03 01:27:08 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:27:08.218101 | orchestrator | 2026-01-03 01:27:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:11.268452 | orchestrator | 2026-01-03 01:27:11 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:27:11.270076 | orchestrator | 2026-01-03 01:27:11 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:27:11.270135 | orchestrator | 2026-01-03 01:27:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:14.317863 | orchestrator | 2026-01-03 01:27:14 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:27:14.319292 | orchestrator | 2026-01-03 01:27:14 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:27:14.319409 | orchestrator | 2026-01-03 01:27:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:17.363859 | orchestrator | 2026-01-03 01:27:17 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:27:17.366432 | orchestrator | 2026-01-03 01:27:17 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:27:17.366498 | orchestrator | 2026-01-03 01:27:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:20.414607 | orchestrator | 2026-01-03 01:27:20 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:27:20.416147 | orchestrator | 2026-01-03 01:27:20 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:27:20.416237 | orchestrator | 2026-01-03 01:27:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:23.460908 | orchestrator | 2026-01-03 01:27:23 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:27:23.463087 | orchestrator | 2026-01-03 01:27:23 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:27:23.463239 | orchestrator | 2026-01-03 01:27:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:26.506003 | orchestrator | 2026-01-03 01:27:26 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:27:26.508476 | orchestrator | 2026-01-03 01:27:26 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:27:26.508543 | orchestrator | 2026-01-03 01:27:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:29.553451 | orchestrator | 2026-01-03 01:27:29 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:27:29.555372 | orchestrator | 2026-01-03 01:27:29 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:27:29.555500 | orchestrator | 2026-01-03 01:27:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:32.599193 | orchestrator | 2026-01-03 01:27:32 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:27:32.601199 | orchestrator | 2026-01-03 01:27:32 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:27:32.601342 | orchestrator | 2026-01-03 01:27:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:35.647602 | orchestrator | 2026-01-03 01:27:35 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:27:35.648937 | orchestrator | 2026-01-03 01:27:35 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:27:35.649048 | orchestrator | 2026-01-03 01:27:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:38.695414 | orchestrator | 2026-01-03 01:27:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:27:38.698351 | orchestrator | 2026-01-03 01:27:38 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
01:27:38.698422 | orchestrator | 2026-01-03 01:27:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:41.749169 | orchestrator | 2026-01-03 01:27:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:27:41.751555 | orchestrator | 2026-01-03 01:27:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:27:41.751642 | orchestrator | 2026-01-03 01:27:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:44.795902 | orchestrator | 2026-01-03 01:27:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:27:44.798190 | orchestrator | 2026-01-03 01:27:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:27:44.798272 | orchestrator | 2026-01-03 01:27:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:47.844017 | orchestrator | 2026-01-03 01:27:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:27:47.846051 | orchestrator | 2026-01-03 01:27:47 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:27:47.846089 | orchestrator | 2026-01-03 01:27:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:50.892269 | orchestrator | 2026-01-03 01:27:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:27:50.894059 | orchestrator | 2026-01-03 01:27:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:27:50.894123 | orchestrator | 2026-01-03 01:27:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:53.935554 | orchestrator | 2026-01-03 01:27:53 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:27:53.937456 | orchestrator | 2026-01-03 01:27:53 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:27:53.937514 | orchestrator | 2026-01-03 01:27:53 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 01:27:56.978497 | orchestrator | 2026-01-03 01:27:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:27:56.980657 | orchestrator | 2026-01-03 01:27:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:27:56.980739 | orchestrator | 2026-01-03 01:27:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:28:00.018170 | orchestrator | 2026-01-03 01:28:00 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:28:00.019778 | orchestrator | 2026-01-03 01:28:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:28:00.019854 | orchestrator | 2026-01-03 01:28:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:28:03.062054 | orchestrator | 2026-01-03 01:28:03 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:28:03.063512 | orchestrator | 2026-01-03 01:28:03 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:28:03.064043 | orchestrator | 2026-01-03 01:28:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:28:06.109873 | orchestrator | 2026-01-03 01:28:06 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:28:06.111951 | orchestrator | 2026-01-03 01:28:06 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:28:06.112020 | orchestrator | 2026-01-03 01:28:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:28:09.154273 | orchestrator | 2026-01-03 01:28:09 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:28:09.157096 | orchestrator | 2026-01-03 01:28:09 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:28:09.157166 | orchestrator | 2026-01-03 01:28:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:28:12.200778 | orchestrator | 2026-01-03 
01:28:12 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:28:12.201688 | orchestrator | 2026-01-03 01:28:12 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:28:12.201773 | orchestrator | 2026-01-03 01:28:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:28:15.248364 | orchestrator | 2026-01-03 01:28:15 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:28:15.249628 | orchestrator | 2026-01-03 01:28:15 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:28:15.249680 | orchestrator | 2026-01-03 01:28:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:28:18.300502 | orchestrator | 2026-01-03 01:28:18 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:28:18.302946 | orchestrator | 2026-01-03 01:28:18 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:28:18.303034 | orchestrator | 2026-01-03 01:28:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:28:21.349738 | orchestrator | 2026-01-03 01:28:21 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:28:21.352143 | orchestrator | 2026-01-03 01:28:21 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:28:21.352210 | orchestrator | 2026-01-03 01:28:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:28:24.399600 | orchestrator | 2026-01-03 01:28:24 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:28:24.401962 | orchestrator | 2026-01-03 01:28:24 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:28:24.402093 | orchestrator | 2026-01-03 01:28:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:28:27.455856 | orchestrator | 2026-01-03 01:28:27 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 01:28:27.459685 | orchestrator | 2026-01-03 01:28:27 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:28:27.459772 | orchestrator | 2026-01-03 01:28:27 | INFO  | Wait 1 second(s) until the next check
[... repetitive polling output elided: tasks b16f335a-ddd7-42d6-ae3e-bcacbe9793fa and 80634f4e-7557-474f-b43a-6fc42f9dfcdb both remained in state STARTED, re-checked roughly every 3 seconds from 01:28:30 through 01:33:41, each check followed by "Wait 1 second(s) until the next check" ...]
2026-01-03 01:33:44.529431 | orchestrator | 2026-01-03 01:33:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state
STARTED 2026-01-03 01:33:44.531222 | orchestrator | 2026-01-03 01:33:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:33:44.531290 | orchestrator | 2026-01-03 01:33:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:33:47.577582 | orchestrator | 2026-01-03 01:33:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:33:47.579274 | orchestrator | 2026-01-03 01:33:47 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:33:47.579496 | orchestrator | 2026-01-03 01:33:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:33:50.629039 | orchestrator | 2026-01-03 01:33:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:33:50.631077 | orchestrator | 2026-01-03 01:33:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:33:50.631136 | orchestrator | 2026-01-03 01:33:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:33:53.670095 | orchestrator | 2026-01-03 01:33:53 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:33:53.670803 | orchestrator | 2026-01-03 01:33:53 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:33:53.670864 | orchestrator | 2026-01-03 01:33:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:33:56.719135 | orchestrator | 2026-01-03 01:33:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:33:56.720782 | orchestrator | 2026-01-03 01:33:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:33:56.720835 | orchestrator | 2026-01-03 01:33:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:33:59.765021 | orchestrator | 2026-01-03 01:33:59 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:33:59.766402 | orchestrator | 2026-01-03 01:33:59 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:33:59.766486 | orchestrator | 2026-01-03 01:33:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:34:02.809708 | orchestrator | 2026-01-03 01:34:02 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:34:02.811625 | orchestrator | 2026-01-03 01:34:02 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:34:02.811920 | orchestrator | 2026-01-03 01:34:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:34:05.859924 | orchestrator | 2026-01-03 01:34:05 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:34:05.861563 | orchestrator | 2026-01-03 01:34:05 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:34:05.861902 | orchestrator | 2026-01-03 01:34:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:34:08.907933 | orchestrator | 2026-01-03 01:34:08 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:34:08.909710 | orchestrator | 2026-01-03 01:34:08 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:34:08.909774 | orchestrator | 2026-01-03 01:34:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:34:11.955026 | orchestrator | 2026-01-03 01:34:11 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:34:11.957784 | orchestrator | 2026-01-03 01:34:11 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:34:11.957858 | orchestrator | 2026-01-03 01:34:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:34:15.002592 | orchestrator | 2026-01-03 01:34:15 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:34:15.004443 | orchestrator | 2026-01-03 01:34:15 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
01:34:15.004565 | orchestrator | 2026-01-03 01:34:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:34:18.052520 | orchestrator | 2026-01-03 01:34:18 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:34:18.053644 | orchestrator | 2026-01-03 01:34:18 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:34:18.053705 | orchestrator | 2026-01-03 01:34:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:34:21.100766 | orchestrator | 2026-01-03 01:34:21 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:34:21.102537 | orchestrator | 2026-01-03 01:34:21 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:34:21.102598 | orchestrator | 2026-01-03 01:34:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:34:24.152838 | orchestrator | 2026-01-03 01:34:24 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:34:24.154367 | orchestrator | 2026-01-03 01:34:24 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:34:24.154400 | orchestrator | 2026-01-03 01:34:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:34:27.201316 | orchestrator | 2026-01-03 01:34:27 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:34:27.203434 | orchestrator | 2026-01-03 01:34:27 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:34:27.203559 | orchestrator | 2026-01-03 01:34:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:34:30.247531 | orchestrator | 2026-01-03 01:34:30 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:34:30.248718 | orchestrator | 2026-01-03 01:34:30 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:34:30.248766 | orchestrator | 2026-01-03 01:34:30 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 01:34:33.290265 | orchestrator | 2026-01-03 01:34:33 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:34:33.291488 | orchestrator | 2026-01-03 01:34:33 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:34:33.291553 | orchestrator | 2026-01-03 01:34:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:34:36.336067 | orchestrator | 2026-01-03 01:34:36 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:34:36.338319 | orchestrator | 2026-01-03 01:34:36 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:34:36.338406 | orchestrator | 2026-01-03 01:34:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:34:39.387694 | orchestrator | 2026-01-03 01:34:39 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:34:39.388628 | orchestrator | 2026-01-03 01:34:39 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:34:39.388680 | orchestrator | 2026-01-03 01:34:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:34:42.439727 | orchestrator | 2026-01-03 01:34:42 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:34:42.441357 | orchestrator | 2026-01-03 01:34:42 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:34:42.441425 | orchestrator | 2026-01-03 01:34:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:34:45.488118 | orchestrator | 2026-01-03 01:34:45 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:34:45.489872 | orchestrator | 2026-01-03 01:34:45 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:34:45.489924 | orchestrator | 2026-01-03 01:34:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:34:48.535277 | orchestrator | 2026-01-03 
01:34:48 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:34:48.536719 | orchestrator | 2026-01-03 01:34:48 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:34:48.536785 | orchestrator | 2026-01-03 01:34:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:34:51.577351 | orchestrator | 2026-01-03 01:34:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:34:51.578506 | orchestrator | 2026-01-03 01:34:51 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:34:51.578551 | orchestrator | 2026-01-03 01:34:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:34:54.622107 | orchestrator | 2026-01-03 01:34:54 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:34:54.623890 | orchestrator | 2026-01-03 01:34:54 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:34:54.623958 | orchestrator | 2026-01-03 01:34:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:34:57.672157 | orchestrator | 2026-01-03 01:34:57 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:34:57.674326 | orchestrator | 2026-01-03 01:34:57 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:34:57.674389 | orchestrator | 2026-01-03 01:34:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:35:00.720696 | orchestrator | 2026-01-03 01:35:00 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:35:00.722089 | orchestrator | 2026-01-03 01:35:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:35:00.722137 | orchestrator | 2026-01-03 01:35:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:35:03.763888 | orchestrator | 2026-01-03 01:35:03 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 01:35:03.767364 | orchestrator | 2026-01-03 01:35:03 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:35:03.767444 | orchestrator | 2026-01-03 01:35:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:35:06.816807 | orchestrator | 2026-01-03 01:35:06 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:35:06.820126 | orchestrator | 2026-01-03 01:35:06 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:35:06.820298 | orchestrator | 2026-01-03 01:35:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:35:09.868923 | orchestrator | 2026-01-03 01:35:09 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:35:09.872378 | orchestrator | 2026-01-03 01:35:09 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:35:09.872450 | orchestrator | 2026-01-03 01:35:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:35:12.923171 | orchestrator | 2026-01-03 01:35:12 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:35:12.924821 | orchestrator | 2026-01-03 01:35:12 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:35:12.924979 | orchestrator | 2026-01-03 01:35:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:35:15.974322 | orchestrator | 2026-01-03 01:35:15 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:35:15.976307 | orchestrator | 2026-01-03 01:35:15 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:35:15.976408 | orchestrator | 2026-01-03 01:35:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:35:19.025658 | orchestrator | 2026-01-03 01:35:19 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:35:19.028152 | orchestrator | 2026-01-03 01:35:19 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:35:19.028285 | orchestrator | 2026-01-03 01:35:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:35:22.078939 | orchestrator | 2026-01-03 01:35:22 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:35:22.080788 | orchestrator | 2026-01-03 01:35:22 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:35:22.080866 | orchestrator | 2026-01-03 01:35:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:35:25.119730 | orchestrator | 2026-01-03 01:35:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:35:25.122788 | orchestrator | 2026-01-03 01:35:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:35:25.122855 | orchestrator | 2026-01-03 01:35:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:35:28.169579 | orchestrator | 2026-01-03 01:35:28 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:35:28.172219 | orchestrator | 2026-01-03 01:35:28 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:35:28.172306 | orchestrator | 2026-01-03 01:35:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:35:31.218793 | orchestrator | 2026-01-03 01:35:31 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:35:31.220466 | orchestrator | 2026-01-03 01:35:31 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:35:31.220513 | orchestrator | 2026-01-03 01:35:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:35:34.266104 | orchestrator | 2026-01-03 01:35:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:35:34.268414 | orchestrator | 2026-01-03 01:35:34 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
01:35:34.268502 | orchestrator | 2026-01-03 01:35:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:35:37.318373 | orchestrator | 2026-01-03 01:35:37 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:35:37.319831 | orchestrator | 2026-01-03 01:35:37 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:35:37.319997 | orchestrator | 2026-01-03 01:35:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:35:40.365310 | orchestrator | 2026-01-03 01:35:40 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:35:40.366676 | orchestrator | 2026-01-03 01:35:40 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:35:40.366745 | orchestrator | 2026-01-03 01:35:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:35:43.416307 | orchestrator | 2026-01-03 01:35:43 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:35:43.417772 | orchestrator | 2026-01-03 01:35:43 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:35:43.417864 | orchestrator | 2026-01-03 01:35:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:35:46.467024 | orchestrator | 2026-01-03 01:35:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:35:46.468783 | orchestrator | 2026-01-03 01:35:46 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:35:46.468843 | orchestrator | 2026-01-03 01:35:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:35:49.517163 | orchestrator | 2026-01-03 01:35:49 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:35:49.518319 | orchestrator | 2026-01-03 01:35:49 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:35:49.518447 | orchestrator | 2026-01-03 01:35:49 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 01:35:52.563558 | orchestrator | 2026-01-03 01:35:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:35:52.566335 | orchestrator | 2026-01-03 01:35:52 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:35:52.566392 | orchestrator | 2026-01-03 01:35:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:35:55.614837 | orchestrator | 2026-01-03 01:35:55 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:35:55.616945 | orchestrator | 2026-01-03 01:35:55 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:35:55.617048 | orchestrator | 2026-01-03 01:35:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:35:58.663421 | orchestrator | 2026-01-03 01:35:58 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:35:58.665404 | orchestrator | 2026-01-03 01:35:58 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:35:58.665506 | orchestrator | 2026-01-03 01:35:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:01.707341 | orchestrator | 2026-01-03 01:36:01 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:36:01.708859 | orchestrator | 2026-01-03 01:36:01 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:36:01.708915 | orchestrator | 2026-01-03 01:36:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:04.746650 | orchestrator | 2026-01-03 01:36:04 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:36:04.748954 | orchestrator | 2026-01-03 01:36:04 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:36:04.749086 | orchestrator | 2026-01-03 01:36:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:07.799763 | orchestrator | 2026-01-03 
01:36:07 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:36:07.800949 | orchestrator | 2026-01-03 01:36:07 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:36:07.800984 | orchestrator | 2026-01-03 01:36:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:10.846125 | orchestrator | 2026-01-03 01:36:10 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:36:10.847916 | orchestrator | 2026-01-03 01:36:10 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:36:10.847982 | orchestrator | 2026-01-03 01:36:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:13.889954 | orchestrator | 2026-01-03 01:36:13 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:36:13.891752 | orchestrator | 2026-01-03 01:36:13 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:36:13.891791 | orchestrator | 2026-01-03 01:36:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:16.934406 | orchestrator | 2026-01-03 01:36:16 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:36:16.936537 | orchestrator | 2026-01-03 01:36:16 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:36:16.936603 | orchestrator | 2026-01-03 01:36:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:19.982360 | orchestrator | 2026-01-03 01:36:19 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:36:19.984276 | orchestrator | 2026-01-03 01:36:19 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:36:19.984344 | orchestrator | 2026-01-03 01:36:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:23.031583 | orchestrator | 2026-01-03 01:36:23 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 01:36:23.031763 | orchestrator | 2026-01-03 01:36:23 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:36:23.031782 | orchestrator | 2026-01-03 01:36:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:26.077638 | orchestrator | 2026-01-03 01:36:26 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:36:26.079447 | orchestrator | 2026-01-03 01:36:26 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:36:26.079489 | orchestrator | 2026-01-03 01:36:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:29.118951 | orchestrator | 2026-01-03 01:36:29 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:36:29.120692 | orchestrator | 2026-01-03 01:36:29 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:36:29.120814 | orchestrator | 2026-01-03 01:36:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:32.166406 | orchestrator | 2026-01-03 01:36:32 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:36:32.168939 | orchestrator | 2026-01-03 01:36:32 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:36:32.169043 | orchestrator | 2026-01-03 01:36:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:35.216477 | orchestrator | 2026-01-03 01:36:35 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:36:35.217965 | orchestrator | 2026-01-03 01:36:35 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:36:35.218084 | orchestrator | 2026-01-03 01:36:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:38.261948 | orchestrator | 2026-01-03 01:36:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:36:38.263045 | orchestrator | 2026-01-03 01:36:38 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:36:38.263090 | orchestrator | 2026-01-03 01:36:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:41.307808 | orchestrator | 2026-01-03 01:36:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:36:41.309888 | orchestrator | 2026-01-03 01:36:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:36:41.309925 | orchestrator | 2026-01-03 01:36:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:44.356505 | orchestrator | 2026-01-03 01:36:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:36:44.357973 | orchestrator | 2026-01-03 01:36:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:36:44.358239 | orchestrator | 2026-01-03 01:36:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:47.406095 | orchestrator | 2026-01-03 01:36:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:36:47.407896 | orchestrator | 2026-01-03 01:36:47 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:36:47.407942 | orchestrator | 2026-01-03 01:36:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:50.455410 | orchestrator | 2026-01-03 01:36:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:36:50.456428 | orchestrator | 2026-01-03 01:36:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:36:50.456490 | orchestrator | 2026-01-03 01:36:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:53.498300 | orchestrator | 2026-01-03 01:36:53 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:36:53.499430 | orchestrator | 2026-01-03 01:36:53 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
01:36:53.499488 | orchestrator | 2026-01-03 01:36:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:56.542436 | orchestrator | 2026-01-03 01:36:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:36:56.544034 | orchestrator | 2026-01-03 01:36:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:36:56.544120 | orchestrator | 2026-01-03 01:36:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:59.590303 | orchestrator | 2026-01-03 01:36:59 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:36:59.592624 | orchestrator | 2026-01-03 01:36:59 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:36:59.592685 | orchestrator | 2026-01-03 01:36:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:37:02.634717 | orchestrator | 2026-01-03 01:37:02 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:37:02.637268 | orchestrator | 2026-01-03 01:37:02 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:37:02.637411 | orchestrator | 2026-01-03 01:37:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:37:05.687659 | orchestrator | 2026-01-03 01:37:05 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:37:05.689611 | orchestrator | 2026-01-03 01:37:05 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:37:05.689750 | orchestrator | 2026-01-03 01:37:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:37:08.733840 | orchestrator | 2026-01-03 01:37:08 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:37:08.735608 | orchestrator | 2026-01-03 01:37:08 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:37:08.735773 | orchestrator | 2026-01-03 01:37:08 | INFO  | Wait 1 second(s) 
until the next check
2026-01-03 01:37:11.781905 | orchestrator | 2026-01-03 01:37:11 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 01:37:11.783944 | orchestrator | 2026-01-03 01:37:11 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 01:37:11.784010 | orchestrator | 2026-01-03 01:37:11 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 01:37:14 to 01:42:22; both tasks remained in state STARTED throughout ...]
2026-01-03 01:42:25.651632 | orchestrator | 2026-01-03 01:42:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 01:42:25.652722 | orchestrator | 2026-01-03 01:42:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 01:42:25.652761 | orchestrator | 2026-01-03 01:42:25 | INFO  | Wait 1 second(s)
until the next check 2026-01-03 01:42:28.696380 | orchestrator | 2026-01-03 01:42:28 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:42:28.698380 | orchestrator | 2026-01-03 01:42:28 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:42:28.698507 | orchestrator | 2026-01-03 01:42:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:42:31.741322 | orchestrator | 2026-01-03 01:42:31 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:42:31.743987 | orchestrator | 2026-01-03 01:42:31 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:42:31.744084 | orchestrator | 2026-01-03 01:42:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:42:34.790366 | orchestrator | 2026-01-03 01:42:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:42:34.792265 | orchestrator | 2026-01-03 01:42:34 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:42:34.792489 | orchestrator | 2026-01-03 01:42:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:42:37.833458 | orchestrator | 2026-01-03 01:42:37 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:42:37.835168 | orchestrator | 2026-01-03 01:42:37 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:42:37.835231 | orchestrator | 2026-01-03 01:42:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:42:40.880845 | orchestrator | 2026-01-03 01:42:40 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:42:40.882598 | orchestrator | 2026-01-03 01:42:40 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:42:40.882667 | orchestrator | 2026-01-03 01:42:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:42:43.932413 | orchestrator | 2026-01-03 
01:42:43 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:42:43.935286 | orchestrator | 2026-01-03 01:42:43 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:42:43.935440 | orchestrator | 2026-01-03 01:42:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:42:46.984802 | orchestrator | 2026-01-03 01:42:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:42:46.986830 | orchestrator | 2026-01-03 01:42:46 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:42:46.986927 | orchestrator | 2026-01-03 01:42:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:42:50.049752 | orchestrator | 2026-01-03 01:42:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:42:50.051438 | orchestrator | 2026-01-03 01:42:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:42:50.051519 | orchestrator | 2026-01-03 01:42:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:42:53.093121 | orchestrator | 2026-01-03 01:42:53 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:42:53.093893 | orchestrator | 2026-01-03 01:42:53 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:42:53.093945 | orchestrator | 2026-01-03 01:42:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:42:56.140374 | orchestrator | 2026-01-03 01:42:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:42:56.142277 | orchestrator | 2026-01-03 01:42:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:42:56.142347 | orchestrator | 2026-01-03 01:42:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:42:59.186149 | orchestrator | 2026-01-03 01:42:59 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 01:42:59.187852 | orchestrator | 2026-01-03 01:42:59 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:42:59.187882 | orchestrator | 2026-01-03 01:42:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:43:02.232685 | orchestrator | 2026-01-03 01:43:02 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:43:02.234282 | orchestrator | 2026-01-03 01:43:02 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:43:02.234342 | orchestrator | 2026-01-03 01:43:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:43:05.289929 | orchestrator | 2026-01-03 01:43:05 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:43:05.291818 | orchestrator | 2026-01-03 01:43:05 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:43:05.291881 | orchestrator | 2026-01-03 01:43:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:43:08.337611 | orchestrator | 2026-01-03 01:43:08 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:43:08.339562 | orchestrator | 2026-01-03 01:43:08 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:43:08.339623 | orchestrator | 2026-01-03 01:43:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:43:11.382559 | orchestrator | 2026-01-03 01:43:11 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:43:11.384176 | orchestrator | 2026-01-03 01:43:11 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:43:11.384339 | orchestrator | 2026-01-03 01:43:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:43:14.435773 | orchestrator | 2026-01-03 01:43:14 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:43:14.437562 | orchestrator | 2026-01-03 01:43:14 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:43:14.437618 | orchestrator | 2026-01-03 01:43:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:43:17.480860 | orchestrator | 2026-01-03 01:43:17 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:43:17.483184 | orchestrator | 2026-01-03 01:43:17 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:43:17.483261 | orchestrator | 2026-01-03 01:43:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:43:20.533546 | orchestrator | 2026-01-03 01:43:20 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:43:20.533641 | orchestrator | 2026-01-03 01:43:20 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:43:20.533679 | orchestrator | 2026-01-03 01:43:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:43:23.576175 | orchestrator | 2026-01-03 01:43:23 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:43:23.578430 | orchestrator | 2026-01-03 01:43:23 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:43:23.578482 | orchestrator | 2026-01-03 01:43:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:43:26.624128 | orchestrator | 2026-01-03 01:43:26 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:43:26.625932 | orchestrator | 2026-01-03 01:43:26 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:43:26.626003 | orchestrator | 2026-01-03 01:43:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:43:29.673473 | orchestrator | 2026-01-03 01:43:29 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:43:29.675367 | orchestrator | 2026-01-03 01:43:29 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
01:43:29.675451 | orchestrator | 2026-01-03 01:43:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:43:32.718193 | orchestrator | 2026-01-03 01:43:32 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:43:32.719362 | orchestrator | 2026-01-03 01:43:32 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:43:32.719509 | orchestrator | 2026-01-03 01:43:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:43:35.764744 | orchestrator | 2026-01-03 01:43:35 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:43:35.766379 | orchestrator | 2026-01-03 01:43:35 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:43:35.766437 | orchestrator | 2026-01-03 01:43:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:43:38.809858 | orchestrator | 2026-01-03 01:43:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:43:38.812171 | orchestrator | 2026-01-03 01:43:38 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:43:38.812248 | orchestrator | 2026-01-03 01:43:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:43:41.859423 | orchestrator | 2026-01-03 01:43:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:43:41.860523 | orchestrator | 2026-01-03 01:43:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:43:41.860569 | orchestrator | 2026-01-03 01:43:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:43:44.910431 | orchestrator | 2026-01-03 01:43:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:43:44.912347 | orchestrator | 2026-01-03 01:43:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:43:44.912515 | orchestrator | 2026-01-03 01:43:44 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 01:43:47.957208 | orchestrator | 2026-01-03 01:43:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:43:47.959156 | orchestrator | 2026-01-03 01:43:47 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:43:47.959229 | orchestrator | 2026-01-03 01:43:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:43:51.005800 | orchestrator | 2026-01-03 01:43:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:43:51.007285 | orchestrator | 2026-01-03 01:43:51 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:43:51.007331 | orchestrator | 2026-01-03 01:43:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:43:54.053935 | orchestrator | 2026-01-03 01:43:54 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:43:54.056306 | orchestrator | 2026-01-03 01:43:54 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:43:54.056429 | orchestrator | 2026-01-03 01:43:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:43:57.108437 | orchestrator | 2026-01-03 01:43:57 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:43:57.110894 | orchestrator | 2026-01-03 01:43:57 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:43:57.111011 | orchestrator | 2026-01-03 01:43:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:44:00.154678 | orchestrator | 2026-01-03 01:44:00 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:44:00.157367 | orchestrator | 2026-01-03 01:44:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:44:00.157449 | orchestrator | 2026-01-03 01:44:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:44:03.198157 | orchestrator | 2026-01-03 
01:44:03 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:44:03.200292 | orchestrator | 2026-01-03 01:44:03 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:44:03.200336 | orchestrator | 2026-01-03 01:44:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:44:06.246502 | orchestrator | 2026-01-03 01:44:06 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:44:06.248737 | orchestrator | 2026-01-03 01:44:06 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:44:06.248801 | orchestrator | 2026-01-03 01:44:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:44:09.289462 | orchestrator | 2026-01-03 01:44:09 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:44:09.291588 | orchestrator | 2026-01-03 01:44:09 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:44:09.291815 | orchestrator | 2026-01-03 01:44:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:44:12.331568 | orchestrator | 2026-01-03 01:44:12 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:44:12.333001 | orchestrator | 2026-01-03 01:44:12 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:44:12.333061 | orchestrator | 2026-01-03 01:44:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:44:15.386992 | orchestrator | 2026-01-03 01:44:15 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:44:15.390113 | orchestrator | 2026-01-03 01:44:15 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:44:15.390178 | orchestrator | 2026-01-03 01:44:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:44:18.437015 | orchestrator | 2026-01-03 01:44:18 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 01:44:18.438683 | orchestrator | 2026-01-03 01:44:18 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:44:18.438732 | orchestrator | 2026-01-03 01:44:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:44:21.490694 | orchestrator | 2026-01-03 01:44:21 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:44:21.493519 | orchestrator | 2026-01-03 01:44:21 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:44:21.493604 | orchestrator | 2026-01-03 01:44:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:44:24.540803 | orchestrator | 2026-01-03 01:44:24 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:44:24.544764 | orchestrator | 2026-01-03 01:44:24 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:44:24.544855 | orchestrator | 2026-01-03 01:44:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:44:27.593534 | orchestrator | 2026-01-03 01:44:27 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:44:27.595135 | orchestrator | 2026-01-03 01:44:27 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:44:27.595181 | orchestrator | 2026-01-03 01:44:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:44:30.640240 | orchestrator | 2026-01-03 01:44:30 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:44:30.739373 | orchestrator | 2026-01-03 01:44:30 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:44:30.739440 | orchestrator | 2026-01-03 01:44:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:44:33.692771 | orchestrator | 2026-01-03 01:44:33 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:44:33.694325 | orchestrator | 2026-01-03 01:44:33 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:44:33.694401 | orchestrator | 2026-01-03 01:44:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:44:36.749411 | orchestrator | 2026-01-03 01:44:36 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:44:36.751755 | orchestrator | 2026-01-03 01:44:36 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:44:36.751951 | orchestrator | 2026-01-03 01:44:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:44:39.798392 | orchestrator | 2026-01-03 01:44:39 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:44:39.801617 | orchestrator | 2026-01-03 01:44:39 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:44:39.801687 | orchestrator | 2026-01-03 01:44:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:44:42.848240 | orchestrator | 2026-01-03 01:44:42 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:44:42.852154 | orchestrator | 2026-01-03 01:44:42 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:44:42.852404 | orchestrator | 2026-01-03 01:44:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:44:45.900657 | orchestrator | 2026-01-03 01:44:45 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:44:45.903642 | orchestrator | 2026-01-03 01:44:45 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:44:45.903704 | orchestrator | 2026-01-03 01:44:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:44:48.945354 | orchestrator | 2026-01-03 01:44:48 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:44:48.948195 | orchestrator | 2026-01-03 01:44:48 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
01:44:48.948306 | orchestrator | 2026-01-03 01:44:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:44:51.994136 | orchestrator | 2026-01-03 01:44:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:44:51.995422 | orchestrator | 2026-01-03 01:44:51 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:44:51.995456 | orchestrator | 2026-01-03 01:44:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:44:55.060722 | orchestrator | 2026-01-03 01:44:55 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:44:55.062197 | orchestrator | 2026-01-03 01:44:55 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:44:55.062397 | orchestrator | 2026-01-03 01:44:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:44:58.114372 | orchestrator | 2026-01-03 01:44:58 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:44:58.116689 | orchestrator | 2026-01-03 01:44:58 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:44:58.116780 | orchestrator | 2026-01-03 01:44:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:01.159263 | orchestrator | 2026-01-03 01:45:01 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:45:01.161210 | orchestrator | 2026-01-03 01:45:01 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:45:01.161256 | orchestrator | 2026-01-03 01:45:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:04.209361 | orchestrator | 2026-01-03 01:45:04 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:45:04.215058 | orchestrator | 2026-01-03 01:45:04 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:45:04.215138 | orchestrator | 2026-01-03 01:45:04 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 01:45:07.257679 | orchestrator | 2026-01-03 01:45:07 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:45:07.259675 | orchestrator | 2026-01-03 01:45:07 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:45:07.259738 | orchestrator | 2026-01-03 01:45:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:10.305638 | orchestrator | 2026-01-03 01:45:10 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:45:10.307531 | orchestrator | 2026-01-03 01:45:10 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:45:10.307597 | orchestrator | 2026-01-03 01:45:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:13.353592 | orchestrator | 2026-01-03 01:45:13 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:45:13.355355 | orchestrator | 2026-01-03 01:45:13 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:45:13.355703 | orchestrator | 2026-01-03 01:45:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:16.398524 | orchestrator | 2026-01-03 01:45:16 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:45:16.399881 | orchestrator | 2026-01-03 01:45:16 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:45:16.399968 | orchestrator | 2026-01-03 01:45:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:19.445229 | orchestrator | 2026-01-03 01:45:19 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:45:19.447009 | orchestrator | 2026-01-03 01:45:19 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:45:19.447071 | orchestrator | 2026-01-03 01:45:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:22.495354 | orchestrator | 2026-01-03 
01:45:22 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:45:22.497504 | orchestrator | 2026-01-03 01:45:22 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:45:22.497594 | orchestrator | 2026-01-03 01:45:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:25.541240 | orchestrator | 2026-01-03 01:45:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:45:25.542251 | orchestrator | 2026-01-03 01:45:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:45:25.542291 | orchestrator | 2026-01-03 01:45:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:28.588253 | orchestrator | 2026-01-03 01:45:28 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:45:28.589903 | orchestrator | 2026-01-03 01:45:28 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:45:28.589980 | orchestrator | 2026-01-03 01:45:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:31.634235 | orchestrator | 2026-01-03 01:45:31 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:45:31.636097 | orchestrator | 2026-01-03 01:45:31 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:45:31.636239 | orchestrator | 2026-01-03 01:45:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:34.682227 | orchestrator | 2026-01-03 01:45:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:45:34.683434 | orchestrator | 2026-01-03 01:45:34 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:45:34.683729 | orchestrator | 2026-01-03 01:45:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:37.728696 | orchestrator | 2026-01-03 01:45:37 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 01:45:37.730596 | orchestrator | 2026-01-03 01:45:37 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:45:37.730657 | orchestrator | 2026-01-03 01:45:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:40.775126 | orchestrator | 2026-01-03 01:45:40 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:45:40.777664 | orchestrator | 2026-01-03 01:45:40 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:45:40.777707 | orchestrator | 2026-01-03 01:45:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:43.821375 | orchestrator | 2026-01-03 01:45:43 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:45:43.823169 | orchestrator | 2026-01-03 01:45:43 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:45:43.823214 | orchestrator | 2026-01-03 01:45:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:46.865995 | orchestrator | 2026-01-03 01:45:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:45:46.867581 | orchestrator | 2026-01-03 01:45:46 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:45:46.867625 | orchestrator | 2026-01-03 01:45:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:49.915977 | orchestrator | 2026-01-03 01:45:49 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:45:49.918743 | orchestrator | 2026-01-03 01:45:49 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:45:49.918795 | orchestrator | 2026-01-03 01:45:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:52.964327 | orchestrator | 2026-01-03 01:45:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:45:52.966041 | orchestrator | 2026-01-03 01:45:52 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:45:52.966147 | orchestrator | 2026-01-03 01:45:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:56.007061 | orchestrator | 2026-01-03 01:45:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:45:56.008631 | orchestrator | 2026-01-03 01:45:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:45:56.008729 | orchestrator | 2026-01-03 01:45:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:59.046658 | orchestrator | 2026-01-03 01:45:59 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:45:59.048018 | orchestrator | 2026-01-03 01:45:59 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:45:59.048063 | orchestrator | 2026-01-03 01:45:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:46:02.092325 | orchestrator | 2026-01-03 01:46:02 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:46:02.094243 | orchestrator | 2026-01-03 01:46:02 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:46:02.094322 | orchestrator | 2026-01-03 01:46:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:46:05.126240 | orchestrator | 2026-01-03 01:46:05 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:46:05.127555 | orchestrator | 2026-01-03 01:46:05 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:46:05.127653 | orchestrator | 2026-01-03 01:46:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:46:08.179418 | orchestrator | 2026-01-03 01:46:08 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:46:08.181526 | orchestrator | 2026-01-03 01:46:08 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
01:46:08.181578 | orchestrator | 2026-01-03 01:46:08 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:46:11.226544 | orchestrator | 2026-01-03 01:46:11 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 01:46:11.228894 | orchestrator | 2026-01-03 01:46:11 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 01:46:11.229024 | orchestrator | 2026-01-03 01:46:11 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeats every ~3 seconds from 01:46:14 to 01:51:37: tasks b16f335a-ddd7-42d6-ae3e-bcacbe9793fa and 80634f4e-7557-474f-b43a-6fc42f9dfcdb remain in state STARTED throughout ...]
2026-01-03 01:51:40.409601 | orchestrator | 2026-01-03 01:51:40 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 01:51:40.413315 | orchestrator | 2026-01-03 01:51:40 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 01:51:40.413386 | orchestrator | 2026-01-03 01:51:40 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 01:51:43.469804 | orchestrator | 2026-01-03 01:51:43 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:51:43.471690 | orchestrator | 2026-01-03 01:51:43 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:51:43.471743 | orchestrator | 2026-01-03 01:51:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:51:46.515592 | orchestrator | 2026-01-03 01:51:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:51:46.518446 | orchestrator | 2026-01-03 01:51:46 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:51:46.518516 | orchestrator | 2026-01-03 01:51:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:51:49.563933 | orchestrator | 2026-01-03 01:51:49 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:51:49.568111 | orchestrator | 2026-01-03 01:51:49 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:51:49.568218 | orchestrator | 2026-01-03 01:51:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:51:52.618917 | orchestrator | 2026-01-03 01:51:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:51:52.620526 | orchestrator | 2026-01-03 01:51:52 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:51:52.620589 | orchestrator | 2026-01-03 01:51:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:51:55.671151 | orchestrator | 2026-01-03 01:51:55 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:51:55.672743 | orchestrator | 2026-01-03 01:51:55 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:51:55.672795 | orchestrator | 2026-01-03 01:51:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:51:58.718931 | orchestrator | 2026-01-03 
01:51:58 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:51:58.720929 | orchestrator | 2026-01-03 01:51:58 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:51:58.720963 | orchestrator | 2026-01-03 01:51:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:52:01.770741 | orchestrator | 2026-01-03 01:52:01 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:52:01.772671 | orchestrator | 2026-01-03 01:52:01 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:52:01.772720 | orchestrator | 2026-01-03 01:52:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:52:04.812942 | orchestrator | 2026-01-03 01:52:04 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:52:04.814918 | orchestrator | 2026-01-03 01:52:04 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:52:04.815061 | orchestrator | 2026-01-03 01:52:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:52:07.863492 | orchestrator | 2026-01-03 01:52:07 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:52:07.864721 | orchestrator | 2026-01-03 01:52:07 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:52:07.864812 | orchestrator | 2026-01-03 01:52:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:52:10.907622 | orchestrator | 2026-01-03 01:52:10 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:52:10.910247 | orchestrator | 2026-01-03 01:52:10 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:52:10.910356 | orchestrator | 2026-01-03 01:52:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:52:13.956888 | orchestrator | 2026-01-03 01:52:13 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 01:52:13.958293 | orchestrator | 2026-01-03 01:52:13 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:52:13.958331 | orchestrator | 2026-01-03 01:52:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:52:17.005466 | orchestrator | 2026-01-03 01:52:17 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:52:17.006972 | orchestrator | 2026-01-03 01:52:17 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:52:17.007078 | orchestrator | 2026-01-03 01:52:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:52:20.051808 | orchestrator | 2026-01-03 01:52:20 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:52:20.053909 | orchestrator | 2026-01-03 01:52:20 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:52:20.053966 | orchestrator | 2026-01-03 01:52:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:52:23.104939 | orchestrator | 2026-01-03 01:52:23 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:52:23.107180 | orchestrator | 2026-01-03 01:52:23 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:52:23.107229 | orchestrator | 2026-01-03 01:52:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:52:26.150561 | orchestrator | 2026-01-03 01:52:26 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:52:26.153191 | orchestrator | 2026-01-03 01:52:26 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:52:26.153243 | orchestrator | 2026-01-03 01:52:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:52:29.199508 | orchestrator | 2026-01-03 01:52:29 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:52:29.200780 | orchestrator | 2026-01-03 01:52:29 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:52:29.200827 | orchestrator | 2026-01-03 01:52:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:52:32.248727 | orchestrator | 2026-01-03 01:52:32 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:52:32.250729 | orchestrator | 2026-01-03 01:52:32 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:52:32.250791 | orchestrator | 2026-01-03 01:52:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:52:35.295718 | orchestrator | 2026-01-03 01:52:35 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:52:35.298375 | orchestrator | 2026-01-03 01:52:35 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:52:35.298448 | orchestrator | 2026-01-03 01:52:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:52:38.347412 | orchestrator | 2026-01-03 01:52:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:52:38.349459 | orchestrator | 2026-01-03 01:52:38 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:52:38.349525 | orchestrator | 2026-01-03 01:52:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:52:41.394340 | orchestrator | 2026-01-03 01:52:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:52:41.395853 | orchestrator | 2026-01-03 01:52:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:52:41.396177 | orchestrator | 2026-01-03 01:52:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:52:44.446314 | orchestrator | 2026-01-03 01:52:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:52:44.448095 | orchestrator | 2026-01-03 01:52:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
01:52:44.448166 | orchestrator | 2026-01-03 01:52:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:52:47.493944 | orchestrator | 2026-01-03 01:52:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:52:47.496135 | orchestrator | 2026-01-03 01:52:47 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:52:47.496225 | orchestrator | 2026-01-03 01:52:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:52:50.536157 | orchestrator | 2026-01-03 01:52:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:52:50.536920 | orchestrator | 2026-01-03 01:52:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:52:50.536941 | orchestrator | 2026-01-03 01:52:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:52:53.578400 | orchestrator | 2026-01-03 01:52:53 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:52:53.579652 | orchestrator | 2026-01-03 01:52:53 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:52:53.579697 | orchestrator | 2026-01-03 01:52:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:52:56.620366 | orchestrator | 2026-01-03 01:52:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:52:56.621232 | orchestrator | 2026-01-03 01:52:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:52:56.621310 | orchestrator | 2026-01-03 01:52:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:52:59.668841 | orchestrator | 2026-01-03 01:52:59 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:52:59.670827 | orchestrator | 2026-01-03 01:52:59 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:52:59.670934 | orchestrator | 2026-01-03 01:52:59 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 01:53:02.718191 | orchestrator | 2026-01-03 01:53:02 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:53:02.719765 | orchestrator | 2026-01-03 01:53:02 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:53:02.719982 | orchestrator | 2026-01-03 01:53:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:53:05.766900 | orchestrator | 2026-01-03 01:53:05 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:53:05.768461 | orchestrator | 2026-01-03 01:53:05 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:53:05.768505 | orchestrator | 2026-01-03 01:53:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:53:08.816959 | orchestrator | 2026-01-03 01:53:08 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:53:08.819053 | orchestrator | 2026-01-03 01:53:08 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:53:08.819137 | orchestrator | 2026-01-03 01:53:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:53:11.863311 | orchestrator | 2026-01-03 01:53:11 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:53:11.865448 | orchestrator | 2026-01-03 01:53:11 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:53:11.865503 | orchestrator | 2026-01-03 01:53:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:53:14.909238 | orchestrator | 2026-01-03 01:53:14 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:53:14.910542 | orchestrator | 2026-01-03 01:53:14 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:53:14.910588 | orchestrator | 2026-01-03 01:53:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:53:17.955147 | orchestrator | 2026-01-03 
01:53:17 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:53:17.956975 | orchestrator | 2026-01-03 01:53:17 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:53:17.957045 | orchestrator | 2026-01-03 01:53:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:53:21.003890 | orchestrator | 2026-01-03 01:53:21 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:53:21.005726 | orchestrator | 2026-01-03 01:53:21 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:53:21.005956 | orchestrator | 2026-01-03 01:53:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:53:24.046880 | orchestrator | 2026-01-03 01:53:24 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:53:24.048090 | orchestrator | 2026-01-03 01:53:24 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:53:24.048118 | orchestrator | 2026-01-03 01:53:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:53:27.097538 | orchestrator | 2026-01-03 01:53:27 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:53:27.099118 | orchestrator | 2026-01-03 01:53:27 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:53:27.099187 | orchestrator | 2026-01-03 01:53:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:53:30.145907 | orchestrator | 2026-01-03 01:53:30 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:53:30.147316 | orchestrator | 2026-01-03 01:53:30 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:53:30.147381 | orchestrator | 2026-01-03 01:53:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:53:33.194207 | orchestrator | 2026-01-03 01:53:33 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 01:53:33.195308 | orchestrator | 2026-01-03 01:53:33 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:53:33.195507 | orchestrator | 2026-01-03 01:53:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:53:36.253729 | orchestrator | 2026-01-03 01:53:36 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:53:36.255183 | orchestrator | 2026-01-03 01:53:36 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:53:36.255231 | orchestrator | 2026-01-03 01:53:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:53:39.303535 | orchestrator | 2026-01-03 01:53:39 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:53:39.305646 | orchestrator | 2026-01-03 01:53:39 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:53:39.305719 | orchestrator | 2026-01-03 01:53:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:53:42.357788 | orchestrator | 2026-01-03 01:53:42 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:53:42.359075 | orchestrator | 2026-01-03 01:53:42 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:53:42.359117 | orchestrator | 2026-01-03 01:53:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:53:45.404162 | orchestrator | 2026-01-03 01:53:45 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:53:45.405035 | orchestrator | 2026-01-03 01:53:45 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:53:45.405223 | orchestrator | 2026-01-03 01:53:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:53:48.452554 | orchestrator | 2026-01-03 01:53:48 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:53:48.454341 | orchestrator | 2026-01-03 01:53:48 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:53:48.454404 | orchestrator | 2026-01-03 01:53:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:53:51.503298 | orchestrator | 2026-01-03 01:53:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:53:51.505932 | orchestrator | 2026-01-03 01:53:51 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:53:51.506228 | orchestrator | 2026-01-03 01:53:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:53:54.554322 | orchestrator | 2026-01-03 01:53:54 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:53:54.555687 | orchestrator | 2026-01-03 01:53:54 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:53:54.555801 | orchestrator | 2026-01-03 01:53:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:53:57.598882 | orchestrator | 2026-01-03 01:53:57 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:53:57.600730 | orchestrator | 2026-01-03 01:53:57 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:53:57.600798 | orchestrator | 2026-01-03 01:53:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:00.640492 | orchestrator | 2026-01-03 01:54:00 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:54:00.642220 | orchestrator | 2026-01-03 01:54:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:54:00.642300 | orchestrator | 2026-01-03 01:54:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:03.688537 | orchestrator | 2026-01-03 01:54:03 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:54:03.689655 | orchestrator | 2026-01-03 01:54:03 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
01:54:03.689747 | orchestrator | 2026-01-03 01:54:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:06.731933 | orchestrator | 2026-01-03 01:54:06 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:54:06.733249 | orchestrator | 2026-01-03 01:54:06 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:54:06.733329 | orchestrator | 2026-01-03 01:54:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:09.779915 | orchestrator | 2026-01-03 01:54:09 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:54:09.781362 | orchestrator | 2026-01-03 01:54:09 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:54:09.781486 | orchestrator | 2026-01-03 01:54:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:12.825790 | orchestrator | 2026-01-03 01:54:12 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:54:12.827927 | orchestrator | 2026-01-03 01:54:12 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:54:12.828169 | orchestrator | 2026-01-03 01:54:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:15.873909 | orchestrator | 2026-01-03 01:54:15 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:54:15.875679 | orchestrator | 2026-01-03 01:54:15 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:54:15.875768 | orchestrator | 2026-01-03 01:54:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:18.922311 | orchestrator | 2026-01-03 01:54:18 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:54:18.924716 | orchestrator | 2026-01-03 01:54:18 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:54:18.924776 | orchestrator | 2026-01-03 01:54:18 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 01:54:21.964181 | orchestrator | 2026-01-03 01:54:21 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:54:21.966290 | orchestrator | 2026-01-03 01:54:21 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:54:21.966394 | orchestrator | 2026-01-03 01:54:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:25.012485 | orchestrator | 2026-01-03 01:54:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:54:25.014719 | orchestrator | 2026-01-03 01:54:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:54:25.014786 | orchestrator | 2026-01-03 01:54:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:28.062268 | orchestrator | 2026-01-03 01:54:28 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:54:28.063779 | orchestrator | 2026-01-03 01:54:28 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:54:28.063885 | orchestrator | 2026-01-03 01:54:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:31.110631 | orchestrator | 2026-01-03 01:54:31 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:54:31.111857 | orchestrator | 2026-01-03 01:54:31 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:54:31.111955 | orchestrator | 2026-01-03 01:54:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:34.152824 | orchestrator | 2026-01-03 01:54:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:54:34.155911 | orchestrator | 2026-01-03 01:54:34 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:54:34.156047 | orchestrator | 2026-01-03 01:54:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:37.200436 | orchestrator | 2026-01-03 
01:54:37 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:54:37.203369 | orchestrator | 2026-01-03 01:54:37 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:54:37.203468 | orchestrator | 2026-01-03 01:54:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:40.248546 | orchestrator | 2026-01-03 01:54:40 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:54:40.249246 | orchestrator | 2026-01-03 01:54:40 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:54:40.249292 | orchestrator | 2026-01-03 01:54:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:43.290137 | orchestrator | 2026-01-03 01:54:43 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:54:43.291818 | orchestrator | 2026-01-03 01:54:43 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:54:43.291881 | orchestrator | 2026-01-03 01:54:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:46.331191 | orchestrator | 2026-01-03 01:54:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:54:46.332660 | orchestrator | 2026-01-03 01:54:46 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:54:46.332782 | orchestrator | 2026-01-03 01:54:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:49.384110 | orchestrator | 2026-01-03 01:54:49 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:54:49.385933 | orchestrator | 2026-01-03 01:54:49 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:54:49.386112 | orchestrator | 2026-01-03 01:54:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:52.437651 | orchestrator | 2026-01-03 01:54:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 01:54:52.439389 | orchestrator | 2026-01-03 01:54:52 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:54:52.439528 | orchestrator | 2026-01-03 01:54:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:55.484792 | orchestrator | 2026-01-03 01:54:55 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:54:55.485684 | orchestrator | 2026-01-03 01:54:55 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:54:55.485716 | orchestrator | 2026-01-03 01:54:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:58.533359 | orchestrator | 2026-01-03 01:54:58 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:54:58.535910 | orchestrator | 2026-01-03 01:54:58 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:54:58.536079 | orchestrator | 2026-01-03 01:54:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:55:01.582704 | orchestrator | 2026-01-03 01:55:01 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:55:01.584385 | orchestrator | 2026-01-03 01:55:01 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:55:01.584447 | orchestrator | 2026-01-03 01:55:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:55:04.633761 | orchestrator | 2026-01-03 01:55:04 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:55:04.635658 | orchestrator | 2026-01-03 01:55:04 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:55:04.635714 | orchestrator | 2026-01-03 01:55:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:55:07.684114 | orchestrator | 2026-01-03 01:55:07 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:55:07.685458 | orchestrator | 2026-01-03 01:55:07 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:55:07.685519 | orchestrator | 2026-01-03 01:55:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:55:10.727414 | orchestrator | 2026-01-03 01:55:10 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:55:10.728899 | orchestrator | 2026-01-03 01:55:10 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:55:10.728932 | orchestrator | 2026-01-03 01:55:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:55:13.776454 | orchestrator | 2026-01-03 01:55:13 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:55:13.777787 | orchestrator | 2026-01-03 01:55:13 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:55:13.777827 | orchestrator | 2026-01-03 01:55:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:55:16.831431 | orchestrator | 2026-01-03 01:55:16 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:55:16.832564 | orchestrator | 2026-01-03 01:55:16 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:55:16.832608 | orchestrator | 2026-01-03 01:55:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:55:19.873622 | orchestrator | 2026-01-03 01:55:19 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:55:19.875355 | orchestrator | 2026-01-03 01:55:19 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 01:55:19.875430 | orchestrator | 2026-01-03 01:55:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:55:22.927345 | orchestrator | 2026-01-03 01:55:22 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 01:55:22.929164 | orchestrator | 2026-01-03 01:55:22 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
01:55:22.929242 | orchestrator | 2026-01-03 01:55:22 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:55:25.973512 | orchestrator | 2026-01-03 01:55:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 01:55:25.975808 | orchestrator | 2026-01-03 01:55:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 01:55:25.975862 | orchestrator | 2026-01-03 01:55:25 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 01:55:29 through 02:00:21; both tasks remained in state STARTED throughout ...]
2026-01-03 02:00:24.593403 | orchestrator | 2026-01-03 02:00:24 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 02:00:24.596030 | orchestrator | 2026-01-03 02:00:24 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:00:24.596112 | orchestrator | 2026-01-03 02:00:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:00:27.639939 | orchestrator | 2026-01-03 02:00:27 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:00:27.641659 | orchestrator | 2026-01-03 02:00:27 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:00:27.641791 | orchestrator | 2026-01-03 02:00:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:00:30.687054 | orchestrator | 2026-01-03 02:00:30 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:00:30.688693 | orchestrator | 2026-01-03 02:00:30 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:00:30.688749 | orchestrator | 2026-01-03 02:00:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:00:33.738110 | orchestrator | 2026-01-03 02:00:33 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:00:33.739931 | orchestrator | 2026-01-03 02:00:33 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:00:33.740442 | orchestrator | 2026-01-03 02:00:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:00:36.785258 | orchestrator | 2026-01-03 02:00:36 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:00:36.787837 | orchestrator | 2026-01-03 02:00:36 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:00:36.787927 | orchestrator | 2026-01-03 02:00:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:00:39.831444 | orchestrator | 2026-01-03 02:00:39 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:00:39.833626 | orchestrator | 2026-01-03 02:00:39 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
02:00:39.833672 | orchestrator | 2026-01-03 02:00:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:00:42.880411 | orchestrator | 2026-01-03 02:00:42 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:00:42.882337 | orchestrator | 2026-01-03 02:00:42 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:00:42.882429 | orchestrator | 2026-01-03 02:00:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:00:45.925130 | orchestrator | 2026-01-03 02:00:45 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:00:45.926849 | orchestrator | 2026-01-03 02:00:45 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:00:45.927007 | orchestrator | 2026-01-03 02:00:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:00:48.973642 | orchestrator | 2026-01-03 02:00:48 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:00:48.975180 | orchestrator | 2026-01-03 02:00:48 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:00:48.975237 | orchestrator | 2026-01-03 02:00:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:00:52.018305 | orchestrator | 2026-01-03 02:00:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:00:52.019487 | orchestrator | 2026-01-03 02:00:52 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:00:52.019531 | orchestrator | 2026-01-03 02:00:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:00:55.062246 | orchestrator | 2026-01-03 02:00:55 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:00:55.063783 | orchestrator | 2026-01-03 02:00:55 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:00:55.063825 | orchestrator | 2026-01-03 02:00:55 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 02:00:58.108705 | orchestrator | 2026-01-03 02:00:58 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:00:58.110362 | orchestrator | 2026-01-03 02:00:58 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:00:58.110411 | orchestrator | 2026-01-03 02:00:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:01:01.154123 | orchestrator | 2026-01-03 02:01:01 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:01:01.155958 | orchestrator | 2026-01-03 02:01:01 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:01:01.156025 | orchestrator | 2026-01-03 02:01:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:01:04.204004 | orchestrator | 2026-01-03 02:01:04 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:01:04.205953 | orchestrator | 2026-01-03 02:01:04 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:01:04.206102 | orchestrator | 2026-01-03 02:01:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:01:07.249036 | orchestrator | 2026-01-03 02:01:07 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:01:07.251341 | orchestrator | 2026-01-03 02:01:07 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:01:07.251394 | orchestrator | 2026-01-03 02:01:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:01:10.289506 | orchestrator | 2026-01-03 02:01:10 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:01:10.290386 | orchestrator | 2026-01-03 02:01:10 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:01:10.290417 | orchestrator | 2026-01-03 02:01:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:01:13.332110 | orchestrator | 2026-01-03 
02:01:13 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:01:13.333955 | orchestrator | 2026-01-03 02:01:13 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:01:13.334101 | orchestrator | 2026-01-03 02:01:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:01:16.378273 | orchestrator | 2026-01-03 02:01:16 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:01:16.379816 | orchestrator | 2026-01-03 02:01:16 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:01:16.379927 | orchestrator | 2026-01-03 02:01:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:01:19.434960 | orchestrator | 2026-01-03 02:01:19 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:01:19.436725 | orchestrator | 2026-01-03 02:01:19 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:01:19.436800 | orchestrator | 2026-01-03 02:01:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:01:22.488314 | orchestrator | 2026-01-03 02:01:22 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:01:22.489811 | orchestrator | 2026-01-03 02:01:22 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:01:22.489878 | orchestrator | 2026-01-03 02:01:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:01:25.537267 | orchestrator | 2026-01-03 02:01:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:01:25.539643 | orchestrator | 2026-01-03 02:01:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:01:25.539719 | orchestrator | 2026-01-03 02:01:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:01:28.586271 | orchestrator | 2026-01-03 02:01:28 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 02:01:28.588040 | orchestrator | 2026-01-03 02:01:28 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:01:28.588083 | orchestrator | 2026-01-03 02:01:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:01:31.633657 | orchestrator | 2026-01-03 02:01:31 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:01:31.635212 | orchestrator | 2026-01-03 02:01:31 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:01:31.635420 | orchestrator | 2026-01-03 02:01:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:01:34.687819 | orchestrator | 2026-01-03 02:01:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:01:34.689371 | orchestrator | 2026-01-03 02:01:34 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:01:34.689427 | orchestrator | 2026-01-03 02:01:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:01:37.729208 | orchestrator | 2026-01-03 02:01:37 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:01:37.730899 | orchestrator | 2026-01-03 02:01:37 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:01:37.730948 | orchestrator | 2026-01-03 02:01:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:01:40.775633 | orchestrator | 2026-01-03 02:01:40 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:01:40.777274 | orchestrator | 2026-01-03 02:01:40 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:01:40.777382 | orchestrator | 2026-01-03 02:01:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:01:43.826426 | orchestrator | 2026-01-03 02:01:43 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:01:43.828183 | orchestrator | 2026-01-03 02:01:43 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:01:43.828243 | orchestrator | 2026-01-03 02:01:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:01:46.873955 | orchestrator | 2026-01-03 02:01:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:01:46.875797 | orchestrator | 2026-01-03 02:01:46 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:01:46.875854 | orchestrator | 2026-01-03 02:01:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:01:49.926551 | orchestrator | 2026-01-03 02:01:49 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:01:49.928289 | orchestrator | 2026-01-03 02:01:49 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:01:49.928347 | orchestrator | 2026-01-03 02:01:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:01:52.973458 | orchestrator | 2026-01-03 02:01:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:01:52.975450 | orchestrator | 2026-01-03 02:01:52 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:01:52.975508 | orchestrator | 2026-01-03 02:01:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:01:56.022863 | orchestrator | 2026-01-03 02:01:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:01:56.024422 | orchestrator | 2026-01-03 02:01:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:01:56.024997 | orchestrator | 2026-01-03 02:01:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:01:59.072601 | orchestrator | 2026-01-03 02:01:59 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:01:59.073790 | orchestrator | 2026-01-03 02:01:59 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
02:01:59.073855 | orchestrator | 2026-01-03 02:01:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:02:02.121347 | orchestrator | 2026-01-03 02:02:02 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:02:02.123433 | orchestrator | 2026-01-03 02:02:02 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:02:02.123515 | orchestrator | 2026-01-03 02:02:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:02:05.166088 | orchestrator | 2026-01-03 02:02:05 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:02:05.167545 | orchestrator | 2026-01-03 02:02:05 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:02:05.167619 | orchestrator | 2026-01-03 02:02:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:02:08.211285 | orchestrator | 2026-01-03 02:02:08 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:02:08.213859 | orchestrator | 2026-01-03 02:02:08 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:02:08.213973 | orchestrator | 2026-01-03 02:02:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:02:11.259705 | orchestrator | 2026-01-03 02:02:11 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:02:11.261776 | orchestrator | 2026-01-03 02:02:11 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:02:11.261884 | orchestrator | 2026-01-03 02:02:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:02:14.305725 | orchestrator | 2026-01-03 02:02:14 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:02:14.308021 | orchestrator | 2026-01-03 02:02:14 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:02:14.308093 | orchestrator | 2026-01-03 02:02:14 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 02:02:17.356436 | orchestrator | 2026-01-03 02:02:17 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:02:17.357229 | orchestrator | 2026-01-03 02:02:17 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:02:17.357343 | orchestrator | 2026-01-03 02:02:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:02:20.402229 | orchestrator | 2026-01-03 02:02:20 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:02:20.405346 | orchestrator | 2026-01-03 02:02:20 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:02:20.405412 | orchestrator | 2026-01-03 02:02:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:02:23.452012 | orchestrator | 2026-01-03 02:02:23 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:02:23.454063 | orchestrator | 2026-01-03 02:02:23 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:02:23.454129 | orchestrator | 2026-01-03 02:02:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:02:26.501091 | orchestrator | 2026-01-03 02:02:26 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:02:26.502652 | orchestrator | 2026-01-03 02:02:26 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:02:26.502994 | orchestrator | 2026-01-03 02:02:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:02:29.545744 | orchestrator | 2026-01-03 02:02:29 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:02:29.548094 | orchestrator | 2026-01-03 02:02:29 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:02:29.548164 | orchestrator | 2026-01-03 02:02:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:02:32.590362 | orchestrator | 2026-01-03 
02:02:32 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:02:32.591799 | orchestrator | 2026-01-03 02:02:32 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:02:32.592619 | orchestrator | 2026-01-03 02:02:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:02:35.636115 | orchestrator | 2026-01-03 02:02:35 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:02:35.638265 | orchestrator | 2026-01-03 02:02:35 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:02:35.638327 | orchestrator | 2026-01-03 02:02:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:02:38.681632 | orchestrator | 2026-01-03 02:02:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:02:38.682770 | orchestrator | 2026-01-03 02:02:38 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:02:38.682803 | orchestrator | 2026-01-03 02:02:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:02:41.729065 | orchestrator | 2026-01-03 02:02:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:02:41.731421 | orchestrator | 2026-01-03 02:02:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:02:41.731484 | orchestrator | 2026-01-03 02:02:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:02:44.782573 | orchestrator | 2026-01-03 02:02:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:02:44.784618 | orchestrator | 2026-01-03 02:02:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:02:44.784671 | orchestrator | 2026-01-03 02:02:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:02:47.832035 | orchestrator | 2026-01-03 02:02:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 02:02:47.833462 | orchestrator | 2026-01-03 02:02:47 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:02:47.833505 | orchestrator | 2026-01-03 02:02:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:02:50.876318 | orchestrator | 2026-01-03 02:02:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:02:50.878200 | orchestrator | 2026-01-03 02:02:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:02:50.878266 | orchestrator | 2026-01-03 02:02:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:02:53.919914 | orchestrator | 2026-01-03 02:02:53 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:02:53.921567 | orchestrator | 2026-01-03 02:02:53 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:02:53.921639 | orchestrator | 2026-01-03 02:02:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:02:56.968380 | orchestrator | 2026-01-03 02:02:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:02:56.969756 | orchestrator | 2026-01-03 02:02:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:02:56.969784 | orchestrator | 2026-01-03 02:02:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:00.019589 | orchestrator | 2026-01-03 02:03:00 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:03:00.022178 | orchestrator | 2026-01-03 02:03:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:03:00.022250 | orchestrator | 2026-01-03 02:03:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:03.060221 | orchestrator | 2026-01-03 02:03:03 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:03:03.060889 | orchestrator | 2026-01-03 02:03:03 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:03:03.061071 | orchestrator | 2026-01-03 02:03:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:06.110295 | orchestrator | 2026-01-03 02:03:06 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:03:06.112434 | orchestrator | 2026-01-03 02:03:06 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:03:06.112492 | orchestrator | 2026-01-03 02:03:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:09.157610 | orchestrator | 2026-01-03 02:03:09 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:03:09.159500 | orchestrator | 2026-01-03 02:03:09 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:03:09.159540 | orchestrator | 2026-01-03 02:03:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:12.203560 | orchestrator | 2026-01-03 02:03:12 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:03:12.204456 | orchestrator | 2026-01-03 02:03:12 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:03:12.204535 | orchestrator | 2026-01-03 02:03:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:15.248158 | orchestrator | 2026-01-03 02:03:15 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:03:15.249052 | orchestrator | 2026-01-03 02:03:15 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:03:15.249103 | orchestrator | 2026-01-03 02:03:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:18.291579 | orchestrator | 2026-01-03 02:03:18 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:03:18.293472 | orchestrator | 2026-01-03 02:03:18 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
02:03:18.293555 | orchestrator | 2026-01-03 02:03:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:21.337007 | orchestrator | 2026-01-03 02:03:21 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:03:21.338572 | orchestrator | 2026-01-03 02:03:21 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:03:21.338627 | orchestrator | 2026-01-03 02:03:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:24.385793 | orchestrator | 2026-01-03 02:03:24 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:03:24.387351 | orchestrator | 2026-01-03 02:03:24 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:03:24.387456 | orchestrator | 2026-01-03 02:03:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:27.434384 | orchestrator | 2026-01-03 02:03:27 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:03:27.435989 | orchestrator | 2026-01-03 02:03:27 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:03:27.436054 | orchestrator | 2026-01-03 02:03:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:30.481123 | orchestrator | 2026-01-03 02:03:30 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:03:30.481951 | orchestrator | 2026-01-03 02:03:30 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:03:30.482067 | orchestrator | 2026-01-03 02:03:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:33.529213 | orchestrator | 2026-01-03 02:03:33 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:03:33.531299 | orchestrator | 2026-01-03 02:03:33 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:03:33.531399 | orchestrator | 2026-01-03 02:03:33 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 02:03:36.577501 | orchestrator | 2026-01-03 02:03:36 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:03:36.579511 | orchestrator | 2026-01-03 02:03:36 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:03:36.579576 | orchestrator | 2026-01-03 02:03:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:39.621669 | orchestrator | 2026-01-03 02:03:39 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:03:39.624498 | orchestrator | 2026-01-03 02:03:39 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:03:39.624576 | orchestrator | 2026-01-03 02:03:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:42.669586 | orchestrator | 2026-01-03 02:03:42 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:03:42.670521 | orchestrator | 2026-01-03 02:03:42 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:03:42.670559 | orchestrator | 2026-01-03 02:03:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:45.716374 | orchestrator | 2026-01-03 02:03:45 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:03:45.717991 | orchestrator | 2026-01-03 02:03:45 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:03:45.718058 | orchestrator | 2026-01-03 02:03:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:48.758360 | orchestrator | 2026-01-03 02:03:48 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:03:48.760503 | orchestrator | 2026-01-03 02:03:48 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:03:48.760545 | orchestrator | 2026-01-03 02:03:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:51.806003 | orchestrator | 2026-01-03 
02:03:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:03:51.807631 | orchestrator | 2026-01-03 02:03:51 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:03:51.807683 | orchestrator | 2026-01-03 02:03:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:54.855212 | orchestrator | 2026-01-03 02:03:54 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:03:54.856574 | orchestrator | 2026-01-03 02:03:54 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:03:54.856686 | orchestrator | 2026-01-03 02:03:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:57.903567 | orchestrator | 2026-01-03 02:03:57 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:03:57.904200 | orchestrator | 2026-01-03 02:03:57 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:03:57.904317 | orchestrator | 2026-01-03 02:03:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:04:00.947445 | orchestrator | 2026-01-03 02:04:00 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:04:00.949343 | orchestrator | 2026-01-03 02:04:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:04:00.949394 | orchestrator | 2026-01-03 02:04:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:04:03.997389 | orchestrator | 2026-01-03 02:04:03 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:04:03.999760 | orchestrator | 2026-01-03 02:04:04 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:04:03.999938 | orchestrator | 2026-01-03 02:04:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:04:07.040086 | orchestrator | 2026-01-03 02:04:07 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 02:04:07.041454 | orchestrator | 2026-01-03 02:04:07 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:04:07.041500 | orchestrator | 2026-01-03 02:04:07 | INFO  | Wait 1 second(s) until the next check
[... identical status polls repeated every ~3 seconds from 02:04:10 through 02:09:36: tasks b16f335a-ddd7-42d6-ae3e-bcacbe9793fa and 80634f4e-7557-474f-b43a-6fc42f9dfcdb remained in state STARTED, each poll followed by "Wait 1 second(s) until the next check" ...]
2026-01-03 02:09:39.301228 | orchestrator | 2026-01-03 02:09:39 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:09:39.303091 | orchestrator | 2026-01-03 02:09:39 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:09:39.303211 | orchestrator | 2026-01-03 02:09:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:09:42.348935 | orchestrator | 2026-01-03 02:09:42 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:09:42.350428 | orchestrator | 2026-01-03 02:09:42 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:09:42.350621 | orchestrator | 2026-01-03 02:09:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:09:45.389722 | orchestrator | 2026-01-03 02:09:45 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:09:45.392235 | orchestrator | 2026-01-03 02:09:45 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:09:45.392283 | orchestrator | 2026-01-03 02:09:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:09:48.439573 | orchestrator | 2026-01-03 02:09:48 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:09:48.441557 | orchestrator | 2026-01-03 02:09:48 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:09:48.441612 | orchestrator | 2026-01-03 02:09:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:09:51.480875 | orchestrator | 2026-01-03 02:09:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:09:51.482407 | orchestrator | 2026-01-03 02:09:51 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:09:51.482458 | orchestrator | 2026-01-03 02:09:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:09:54.531219 | orchestrator | 2026-01-03 02:09:54 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:09:54.532537 | orchestrator | 2026-01-03 02:09:54 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
02:09:54.532628 | orchestrator | 2026-01-03 02:09:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:09:57.577515 | orchestrator | 2026-01-03 02:09:57 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:09:57.580327 | orchestrator | 2026-01-03 02:09:57 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:09:57.580460 | orchestrator | 2026-01-03 02:09:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:10:00.623746 | orchestrator | 2026-01-03 02:10:00 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:10:00.625445 | orchestrator | 2026-01-03 02:10:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:10:00.625500 | orchestrator | 2026-01-03 02:10:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:10:03.675590 | orchestrator | 2026-01-03 02:10:03 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:10:03.677346 | orchestrator | 2026-01-03 02:10:03 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:10:03.677400 | orchestrator | 2026-01-03 02:10:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:10:06.723421 | orchestrator | 2026-01-03 02:10:06 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:10:06.725330 | orchestrator | 2026-01-03 02:10:06 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:10:06.725433 | orchestrator | 2026-01-03 02:10:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:10:09.764999 | orchestrator | 2026-01-03 02:10:09 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:10:09.765634 | orchestrator | 2026-01-03 02:10:09 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:10:09.765662 | orchestrator | 2026-01-03 02:10:09 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 02:10:12.811634 | orchestrator | 2026-01-03 02:10:12 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:10:12.813528 | orchestrator | 2026-01-03 02:10:12 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:10:12.813634 | orchestrator | 2026-01-03 02:10:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:10:15.861597 | orchestrator | 2026-01-03 02:10:15 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:10:15.863583 | orchestrator | 2026-01-03 02:10:15 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:10:15.863666 | orchestrator | 2026-01-03 02:10:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:10:18.910325 | orchestrator | 2026-01-03 02:10:18 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:10:18.912509 | orchestrator | 2026-01-03 02:10:18 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:10:18.912543 | orchestrator | 2026-01-03 02:10:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:10:21.957275 | orchestrator | 2026-01-03 02:10:21 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:10:21.959461 | orchestrator | 2026-01-03 02:10:21 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:10:21.959525 | orchestrator | 2026-01-03 02:10:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:10:25.002983 | orchestrator | 2026-01-03 02:10:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:10:25.005068 | orchestrator | 2026-01-03 02:10:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:10:25.005260 | orchestrator | 2026-01-03 02:10:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:10:28.048769 | orchestrator | 2026-01-03 
02:10:28 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:10:28.050284 | orchestrator | 2026-01-03 02:10:28 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:10:28.050346 | orchestrator | 2026-01-03 02:10:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:10:31.095819 | orchestrator | 2026-01-03 02:10:31 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:10:31.097720 | orchestrator | 2026-01-03 02:10:31 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:10:31.097778 | orchestrator | 2026-01-03 02:10:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:10:34.139463 | orchestrator | 2026-01-03 02:10:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:10:34.141688 | orchestrator | 2026-01-03 02:10:34 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:10:34.141734 | orchestrator | 2026-01-03 02:10:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:10:37.194000 | orchestrator | 2026-01-03 02:10:37 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:10:37.195839 | orchestrator | 2026-01-03 02:10:37 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:10:37.195928 | orchestrator | 2026-01-03 02:10:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:10:40.243138 | orchestrator | 2026-01-03 02:10:40 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:10:40.243761 | orchestrator | 2026-01-03 02:10:40 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:10:40.243780 | orchestrator | 2026-01-03 02:10:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:10:43.288432 | orchestrator | 2026-01-03 02:10:43 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 02:10:43.289117 | orchestrator | 2026-01-03 02:10:43 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:10:43.289404 | orchestrator | 2026-01-03 02:10:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:10:46.332446 | orchestrator | 2026-01-03 02:10:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:10:46.333258 | orchestrator | 2026-01-03 02:10:46 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:10:46.333451 | orchestrator | 2026-01-03 02:10:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:10:49.376555 | orchestrator | 2026-01-03 02:10:49 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:10:49.379797 | orchestrator | 2026-01-03 02:10:49 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:10:49.379897 | orchestrator | 2026-01-03 02:10:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:10:52.427110 | orchestrator | 2026-01-03 02:10:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:10:52.429764 | orchestrator | 2026-01-03 02:10:52 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:10:52.429825 | orchestrator | 2026-01-03 02:10:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:10:55.478131 | orchestrator | 2026-01-03 02:10:55 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:10:55.479223 | orchestrator | 2026-01-03 02:10:55 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:10:55.479280 | orchestrator | 2026-01-03 02:10:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:10:58.523752 | orchestrator | 2026-01-03 02:10:58 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:10:58.525735 | orchestrator | 2026-01-03 02:10:58 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:10:58.525923 | orchestrator | 2026-01-03 02:10:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:01.567634 | orchestrator | 2026-01-03 02:11:01 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:11:01.568965 | orchestrator | 2026-01-03 02:11:01 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:11:01.569228 | orchestrator | 2026-01-03 02:11:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:04.611063 | orchestrator | 2026-01-03 02:11:04 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:11:04.612612 | orchestrator | 2026-01-03 02:11:04 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:11:04.612673 | orchestrator | 2026-01-03 02:11:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:07.658339 | orchestrator | 2026-01-03 02:11:07 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:11:07.659915 | orchestrator | 2026-01-03 02:11:07 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:11:07.660006 | orchestrator | 2026-01-03 02:11:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:10.702415 | orchestrator | 2026-01-03 02:11:10 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:11:10.704122 | orchestrator | 2026-01-03 02:11:10 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:11:10.704271 | orchestrator | 2026-01-03 02:11:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:13.750361 | orchestrator | 2026-01-03 02:11:13 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:11:13.752960 | orchestrator | 2026-01-03 02:11:13 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
02:11:13.753103 | orchestrator | 2026-01-03 02:11:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:16.799154 | orchestrator | 2026-01-03 02:11:16 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:11:16.800590 | orchestrator | 2026-01-03 02:11:16 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:11:16.800646 | orchestrator | 2026-01-03 02:11:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:19.844844 | orchestrator | 2026-01-03 02:11:19 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:11:19.847546 | orchestrator | 2026-01-03 02:11:19 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:11:19.847589 | orchestrator | 2026-01-03 02:11:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:22.891748 | orchestrator | 2026-01-03 02:11:22 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:11:22.892823 | orchestrator | 2026-01-03 02:11:22 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:11:22.892901 | orchestrator | 2026-01-03 02:11:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:25.941904 | orchestrator | 2026-01-03 02:11:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:11:25.943551 | orchestrator | 2026-01-03 02:11:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:11:25.943651 | orchestrator | 2026-01-03 02:11:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:28.988303 | orchestrator | 2026-01-03 02:11:28 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:11:28.990283 | orchestrator | 2026-01-03 02:11:28 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:11:28.990338 | orchestrator | 2026-01-03 02:11:28 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 02:11:32.030284 | orchestrator | 2026-01-03 02:11:32 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:11:32.032869 | orchestrator | 2026-01-03 02:11:32 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:11:32.032922 | orchestrator | 2026-01-03 02:11:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:35.080011 | orchestrator | 2026-01-03 02:11:35 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:11:35.081939 | orchestrator | 2026-01-03 02:11:35 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:11:35.081988 | orchestrator | 2026-01-03 02:11:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:38.129688 | orchestrator | 2026-01-03 02:11:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:11:38.131155 | orchestrator | 2026-01-03 02:11:38 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:11:38.131285 | orchestrator | 2026-01-03 02:11:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:41.177974 | orchestrator | 2026-01-03 02:11:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:11:41.179930 | orchestrator | 2026-01-03 02:11:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:11:41.180075 | orchestrator | 2026-01-03 02:11:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:44.222774 | orchestrator | 2026-01-03 02:11:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:11:44.223905 | orchestrator | 2026-01-03 02:11:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:11:44.224151 | orchestrator | 2026-01-03 02:11:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:47.275812 | orchestrator | 2026-01-03 
02:11:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:11:47.278652 | orchestrator | 2026-01-03 02:11:47 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:11:47.278709 | orchestrator | 2026-01-03 02:11:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:50.327989 | orchestrator | 2026-01-03 02:11:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:11:50.329610 | orchestrator | 2026-01-03 02:11:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:11:50.329642 | orchestrator | 2026-01-03 02:11:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:53.377204 | orchestrator | 2026-01-03 02:11:53 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:11:53.379648 | orchestrator | 2026-01-03 02:11:53 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:11:53.379701 | orchestrator | 2026-01-03 02:11:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:56.429675 | orchestrator | 2026-01-03 02:11:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:11:56.431091 | orchestrator | 2026-01-03 02:11:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:11:56.431137 | orchestrator | 2026-01-03 02:11:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:59.478372 | orchestrator | 2026-01-03 02:11:59 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:11:59.479154 | orchestrator | 2026-01-03 02:11:59 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:11:59.479308 | orchestrator | 2026-01-03 02:11:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:02.523465 | orchestrator | 2026-01-03 02:12:02 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 02:12:02.525159 | orchestrator | 2026-01-03 02:12:02 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:12:02.525371 | orchestrator | 2026-01-03 02:12:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:05.570158 | orchestrator | 2026-01-03 02:12:05 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:12:05.572644 | orchestrator | 2026-01-03 02:12:05 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:12:05.572711 | orchestrator | 2026-01-03 02:12:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:08.614473 | orchestrator | 2026-01-03 02:12:08 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:12:08.614911 | orchestrator | 2026-01-03 02:12:08 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:12:08.614987 | orchestrator | 2026-01-03 02:12:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:11.656510 | orchestrator | 2026-01-03 02:12:11 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:12:11.658145 | orchestrator | 2026-01-03 02:12:11 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:12:11.658446 | orchestrator | 2026-01-03 02:12:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:14.704795 | orchestrator | 2026-01-03 02:12:14 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:12:14.706466 | orchestrator | 2026-01-03 02:12:14 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:12:14.706624 | orchestrator | 2026-01-03 02:12:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:17.754004 | orchestrator | 2026-01-03 02:12:17 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:12:17.754638 | orchestrator | 2026-01-03 02:12:17 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:12:17.755006 | orchestrator | 2026-01-03 02:12:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:20.805102 | orchestrator | 2026-01-03 02:12:20 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:12:20.806456 | orchestrator | 2026-01-03 02:12:20 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:12:20.806491 | orchestrator | 2026-01-03 02:12:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:23.856496 | orchestrator | 2026-01-03 02:12:23 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:12:23.857704 | orchestrator | 2026-01-03 02:12:23 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:12:23.857760 | orchestrator | 2026-01-03 02:12:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:26.903330 | orchestrator | 2026-01-03 02:12:26 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:12:26.906377 | orchestrator | 2026-01-03 02:12:26 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:12:26.906487 | orchestrator | 2026-01-03 02:12:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:29.949885 | orchestrator | 2026-01-03 02:12:29 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:12:29.951091 | orchestrator | 2026-01-03 02:12:29 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:12:29.951140 | orchestrator | 2026-01-03 02:12:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:32.995456 | orchestrator | 2026-01-03 02:12:32 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:12:32.997602 | orchestrator | 2026-01-03 02:12:32 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
02:12:32.997673 | orchestrator | 2026-01-03 02:12:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:36.041023 | orchestrator | 2026-01-03 02:12:36 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:12:36.043115 | orchestrator | 2026-01-03 02:12:36 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:12:36.043180 | orchestrator | 2026-01-03 02:12:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:39.086303 | orchestrator | 2026-01-03 02:12:39 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:12:39.088385 | orchestrator | 2026-01-03 02:12:39 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:12:39.088439 | orchestrator | 2026-01-03 02:12:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:42.135933 | orchestrator | 2026-01-03 02:12:42 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:12:42.137552 | orchestrator | 2026-01-03 02:12:42 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:12:42.137603 | orchestrator | 2026-01-03 02:12:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:45.181112 | orchestrator | 2026-01-03 02:12:45 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:12:45.182799 | orchestrator | 2026-01-03 02:12:45 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:12:45.182858 | orchestrator | 2026-01-03 02:12:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:48.228920 | orchestrator | 2026-01-03 02:12:48 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:12:48.231294 | orchestrator | 2026-01-03 02:12:48 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:12:48.231342 | orchestrator | 2026-01-03 02:12:48 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 02:12:51.272142 | orchestrator | 2026-01-03 02:12:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:12:51.274447 | orchestrator | 2026-01-03 02:12:51 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:12:51.274722 | orchestrator | 2026-01-03 02:12:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:54.323689 | orchestrator | 2026-01-03 02:12:54 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:12:54.325875 | orchestrator | 2026-01-03 02:12:54 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:12:54.325956 | orchestrator | 2026-01-03 02:12:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:57.368773 | orchestrator | 2026-01-03 02:12:57 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:12:57.370541 | orchestrator | 2026-01-03 02:12:57 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:12:57.370637 | orchestrator | 2026-01-03 02:12:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:13:00.409585 | orchestrator | 2026-01-03 02:13:00 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:13:00.411334 | orchestrator | 2026-01-03 02:13:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:13:00.411473 | orchestrator | 2026-01-03 02:13:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:13:03.454686 | orchestrator | 2026-01-03 02:13:03 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:13:03.456246 | orchestrator | 2026-01-03 02:13:03 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:13:03.456332 | orchestrator | 2026-01-03 02:13:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:13:06.493710 | orchestrator | 2026-01-03 
02:13:06 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:13:06.496233 | orchestrator | 2026-01-03 02:13:06 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:13:06.496337 | orchestrator | 2026-01-03 02:13:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:13:09.544701 | orchestrator | 2026-01-03 02:13:09 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:13:09.546858 | orchestrator | 2026-01-03 02:13:09 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:13:09.546989 | orchestrator | 2026-01-03 02:13:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:13:12.593279 | orchestrator | 2026-01-03 02:13:12 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:13:12.594672 | orchestrator | 2026-01-03 02:13:12 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:13:12.594724 | orchestrator | 2026-01-03 02:13:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:13:15.638556 | orchestrator | 2026-01-03 02:13:15 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:13:15.640711 | orchestrator | 2026-01-03 02:13:15 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:13:15.640770 | orchestrator | 2026-01-03 02:13:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:13:18.685455 | orchestrator | 2026-01-03 02:13:18 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:13:18.688207 | orchestrator | 2026-01-03 02:13:18 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:13:18.688423 | orchestrator | 2026-01-03 02:13:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:13:21.734962 | orchestrator | 2026-01-03 02:13:21 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 02:13:21.736728 | orchestrator | 2026-01-03 02:13:21 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:13:21.736784 | orchestrator | 2026-01-03 02:13:21 | INFO  | Wait 1 second(s) until the next check
[… identical polling output elided: from 02:13:24 to 02:18:35, tasks b16f335a-ddd7-42d6-ae3e-bcacbe9793fa and 80634f4e-7557-474f-b43a-6fc42f9dfcdb were reported in state STARTED every ~3 seconds, each followed by "Wait 1 second(s) until the next check" …]
2026-01-03 02:18:38.644792 | orchestrator | 2026-01-03 02:18:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 02:18:38.645655 | orchestrator | 2026-01-03 02:18:38 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:18:38.645783 | orchestrator | 2026-01-03 02:18:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:18:41.689779 | orchestrator | 2026-01-03 02:18:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:18:41.691209 | orchestrator | 2026-01-03 02:18:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:18:41.691241 | orchestrator | 2026-01-03 02:18:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:18:44.737171 | orchestrator | 2026-01-03 02:18:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:18:44.738915 | orchestrator | 2026-01-03 02:18:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:18:44.739014 | orchestrator | 2026-01-03 02:18:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:18:47.786290 | orchestrator | 2026-01-03 02:18:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:18:47.787215 | orchestrator | 2026-01-03 02:18:47 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:18:47.787251 | orchestrator | 2026-01-03 02:18:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:18:50.830147 | orchestrator | 2026-01-03 02:18:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:18:50.832165 | orchestrator | 2026-01-03 02:18:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:18:50.832289 | orchestrator | 2026-01-03 02:18:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:18:53.882072 | orchestrator | 2026-01-03 02:18:53 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:18:53.884154 | orchestrator | 2026-01-03 02:18:53 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:18:53.884215 | orchestrator | 2026-01-03 02:18:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:18:56.929460 | orchestrator | 2026-01-03 02:18:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:18:56.931319 | orchestrator | 2026-01-03 02:18:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:18:56.931512 | orchestrator | 2026-01-03 02:18:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:18:59.974885 | orchestrator | 2026-01-03 02:18:59 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:18:59.976548 | orchestrator | 2026-01-03 02:18:59 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:18:59.976609 | orchestrator | 2026-01-03 02:18:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:19:03.021135 | orchestrator | 2026-01-03 02:19:03 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:19:03.023817 | orchestrator | 2026-01-03 02:19:03 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:19:03.023892 | orchestrator | 2026-01-03 02:19:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:19:06.067551 | orchestrator | 2026-01-03 02:19:06 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:19:06.069698 | orchestrator | 2026-01-03 02:19:06 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:19:06.070057 | orchestrator | 2026-01-03 02:19:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:19:09.110251 | orchestrator | 2026-01-03 02:19:09 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:19:09.110367 | orchestrator | 2026-01-03 02:19:09 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
02:19:09.110376 | orchestrator | 2026-01-03 02:19:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:19:12.156217 | orchestrator | 2026-01-03 02:19:12 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:19:12.157478 | orchestrator | 2026-01-03 02:19:12 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:19:12.157535 | orchestrator | 2026-01-03 02:19:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:19:15.202856 | orchestrator | 2026-01-03 02:19:15 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:19:15.205693 | orchestrator | 2026-01-03 02:19:15 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:19:15.206150 | orchestrator | 2026-01-03 02:19:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:19:18.254810 | orchestrator | 2026-01-03 02:19:18 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:19:18.257208 | orchestrator | 2026-01-03 02:19:18 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:19:18.257272 | orchestrator | 2026-01-03 02:19:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:19:21.296689 | orchestrator | 2026-01-03 02:19:21 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:19:21.298796 | orchestrator | 2026-01-03 02:19:21 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:19:21.298863 | orchestrator | 2026-01-03 02:19:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:19:24.339748 | orchestrator | 2026-01-03 02:19:24 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:19:24.342905 | orchestrator | 2026-01-03 02:19:24 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:19:24.342944 | orchestrator | 2026-01-03 02:19:24 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 02:19:27.390206 | orchestrator | 2026-01-03 02:19:27 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:19:27.393423 | orchestrator | 2026-01-03 02:19:27 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:19:27.393493 | orchestrator | 2026-01-03 02:19:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:19:30.436890 | orchestrator | 2026-01-03 02:19:30 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:19:30.440145 | orchestrator | 2026-01-03 02:19:30 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:19:30.440218 | orchestrator | 2026-01-03 02:19:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:19:33.488333 | orchestrator | 2026-01-03 02:19:33 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:19:33.491892 | orchestrator | 2026-01-03 02:19:33 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:19:33.491956 | orchestrator | 2026-01-03 02:19:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:19:36.535612 | orchestrator | 2026-01-03 02:19:36 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:19:36.537360 | orchestrator | 2026-01-03 02:19:36 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:19:36.537421 | orchestrator | 2026-01-03 02:19:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:19:39.581890 | orchestrator | 2026-01-03 02:19:39 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:19:39.583302 | orchestrator | 2026-01-03 02:19:39 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:19:39.583395 | orchestrator | 2026-01-03 02:19:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:19:42.636229 | orchestrator | 2026-01-03 
02:19:42 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:19:42.639544 | orchestrator | 2026-01-03 02:19:42 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:19:42.639678 | orchestrator | 2026-01-03 02:19:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:19:45.687184 | orchestrator | 2026-01-03 02:19:45 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:19:45.690152 | orchestrator | 2026-01-03 02:19:45 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:19:45.690205 | orchestrator | 2026-01-03 02:19:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:19:48.739056 | orchestrator | 2026-01-03 02:19:48 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:19:48.741941 | orchestrator | 2026-01-03 02:19:48 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:19:48.742082 | orchestrator | 2026-01-03 02:19:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:19:51.785975 | orchestrator | 2026-01-03 02:19:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:19:51.787785 | orchestrator | 2026-01-03 02:19:51 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:19:51.787851 | orchestrator | 2026-01-03 02:19:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:19:54.834602 | orchestrator | 2026-01-03 02:19:54 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:19:54.836545 | orchestrator | 2026-01-03 02:19:54 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:19:54.836622 | orchestrator | 2026-01-03 02:19:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:19:57.878670 | orchestrator | 2026-01-03 02:19:57 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 02:19:57.880489 | orchestrator | 2026-01-03 02:19:57 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:19:57.880917 | orchestrator | 2026-01-03 02:19:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:00.923058 | orchestrator | 2026-01-03 02:20:00 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:20:00.924507 | orchestrator | 2026-01-03 02:20:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:20:00.924673 | orchestrator | 2026-01-03 02:20:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:03.975490 | orchestrator | 2026-01-03 02:20:03 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:20:03.977151 | orchestrator | 2026-01-03 02:20:03 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:20:03.977196 | orchestrator | 2026-01-03 02:20:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:07.025194 | orchestrator | 2026-01-03 02:20:07 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:20:07.027163 | orchestrator | 2026-01-03 02:20:07 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:20:07.027214 | orchestrator | 2026-01-03 02:20:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:10.070651 | orchestrator | 2026-01-03 02:20:10 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:20:10.071939 | orchestrator | 2026-01-03 02:20:10 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:20:10.072022 | orchestrator | 2026-01-03 02:20:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:13.119131 | orchestrator | 2026-01-03 02:20:13 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:20:13.120628 | orchestrator | 2026-01-03 02:20:13 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:20:13.120682 | orchestrator | 2026-01-03 02:20:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:16.164768 | orchestrator | 2026-01-03 02:20:16 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:20:16.166646 | orchestrator | 2026-01-03 02:20:16 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:20:16.166714 | orchestrator | 2026-01-03 02:20:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:19.214194 | orchestrator | 2026-01-03 02:20:19 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:20:19.215986 | orchestrator | 2026-01-03 02:20:19 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:20:19.216053 | orchestrator | 2026-01-03 02:20:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:22.258330 | orchestrator | 2026-01-03 02:20:22 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:20:22.260489 | orchestrator | 2026-01-03 02:20:22 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:20:22.260549 | orchestrator | 2026-01-03 02:20:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:25.305253 | orchestrator | 2026-01-03 02:20:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:20:25.306680 | orchestrator | 2026-01-03 02:20:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:20:25.306747 | orchestrator | 2026-01-03 02:20:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:28.352293 | orchestrator | 2026-01-03 02:20:28 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:20:28.355154 | orchestrator | 2026-01-03 02:20:28 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
02:20:28.355270 | orchestrator | 2026-01-03 02:20:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:31.397016 | orchestrator | 2026-01-03 02:20:31 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:20:31.399248 | orchestrator | 2026-01-03 02:20:31 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:20:31.399311 | orchestrator | 2026-01-03 02:20:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:34.446782 | orchestrator | 2026-01-03 02:20:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:20:34.448776 | orchestrator | 2026-01-03 02:20:34 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:20:34.448858 | orchestrator | 2026-01-03 02:20:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:37.494514 | orchestrator | 2026-01-03 02:20:37 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:20:37.496222 | orchestrator | 2026-01-03 02:20:37 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:20:37.496416 | orchestrator | 2026-01-03 02:20:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:40.538508 | orchestrator | 2026-01-03 02:20:40 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:20:40.539333 | orchestrator | 2026-01-03 02:20:40 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:20:40.539397 | orchestrator | 2026-01-03 02:20:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:43.573379 | orchestrator | 2026-01-03 02:20:43 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:20:43.575084 | orchestrator | 2026-01-03 02:20:43 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:20:43.575209 | orchestrator | 2026-01-03 02:20:43 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 02:20:46.614915 | orchestrator | 2026-01-03 02:20:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:20:46.617882 | orchestrator | 2026-01-03 02:20:46 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:20:46.618979 | orchestrator | 2026-01-03 02:20:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:49.671022 | orchestrator | 2026-01-03 02:20:49 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:20:49.673125 | orchestrator | 2026-01-03 02:20:49 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:20:49.673168 | orchestrator | 2026-01-03 02:20:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:52.719713 | orchestrator | 2026-01-03 02:20:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:20:52.722484 | orchestrator | 2026-01-03 02:20:52 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:20:52.722555 | orchestrator | 2026-01-03 02:20:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:55.765786 | orchestrator | 2026-01-03 02:20:55 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:20:55.767113 | orchestrator | 2026-01-03 02:20:55 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:20:55.767164 | orchestrator | 2026-01-03 02:20:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:58.815057 | orchestrator | 2026-01-03 02:20:58 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:20:58.816872 | orchestrator | 2026-01-03 02:20:58 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:20:58.817051 | orchestrator | 2026-01-03 02:20:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:01.863257 | orchestrator | 2026-01-03 
02:21:01 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:21:01.866522 | orchestrator | 2026-01-03 02:21:01 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:21:01.866605 | orchestrator | 2026-01-03 02:21:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:04.916247 | orchestrator | 2026-01-03 02:21:04 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:21:04.918199 | orchestrator | 2026-01-03 02:21:04 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:21:04.918272 | orchestrator | 2026-01-03 02:21:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:07.961077 | orchestrator | 2026-01-03 02:21:07 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:21:07.962634 | orchestrator | 2026-01-03 02:21:07 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:21:07.962808 | orchestrator | 2026-01-03 02:21:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:11.007040 | orchestrator | 2026-01-03 02:21:11 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:21:11.009217 | orchestrator | 2026-01-03 02:21:11 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:21:11.009298 | orchestrator | 2026-01-03 02:21:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:14.057261 | orchestrator | 2026-01-03 02:21:14 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:21:14.059233 | orchestrator | 2026-01-03 02:21:14 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:21:14.059375 | orchestrator | 2026-01-03 02:21:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:17.106809 | orchestrator | 2026-01-03 02:21:17 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 02:21:17.109139 | orchestrator | 2026-01-03 02:21:17 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:21:17.109242 | orchestrator | 2026-01-03 02:21:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:20.159750 | orchestrator | 2026-01-03 02:21:20 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:21:20.161321 | orchestrator | 2026-01-03 02:21:20 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:21:20.161474 | orchestrator | 2026-01-03 02:21:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:23.206002 | orchestrator | 2026-01-03 02:21:23 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:21:23.207300 | orchestrator | 2026-01-03 02:21:23 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:21:23.207341 | orchestrator | 2026-01-03 02:21:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:26.253694 | orchestrator | 2026-01-03 02:21:26 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:21:26.255795 | orchestrator | 2026-01-03 02:21:26 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:21:26.255853 | orchestrator | 2026-01-03 02:21:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:29.304679 | orchestrator | 2026-01-03 02:21:29 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:21:29.306404 | orchestrator | 2026-01-03 02:21:29 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:21:29.306610 | orchestrator | 2026-01-03 02:21:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:32.350798 | orchestrator | 2026-01-03 02:21:32 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:21:32.352766 | orchestrator | 2026-01-03 02:21:32 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:21:32.352893 | orchestrator | 2026-01-03 02:21:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:35.401161 | orchestrator | 2026-01-03 02:21:35 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:21:35.403336 | orchestrator | 2026-01-03 02:21:35 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:21:35.403450 | orchestrator | 2026-01-03 02:21:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:38.450073 | orchestrator | 2026-01-03 02:21:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:21:38.452012 | orchestrator | 2026-01-03 02:21:38 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:21:38.452138 | orchestrator | 2026-01-03 02:21:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:41.496776 | orchestrator | 2026-01-03 02:21:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:21:41.499205 | orchestrator | 2026-01-03 02:21:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:21:41.499271 | orchestrator | 2026-01-03 02:21:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:44.541828 | orchestrator | 2026-01-03 02:21:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:21:44.543241 | orchestrator | 2026-01-03 02:21:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:21:44.543300 | orchestrator | 2026-01-03 02:21:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:47.590935 | orchestrator | 2026-01-03 02:21:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:21:47.592108 | orchestrator | 2026-01-03 02:21:47 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
02:21:47.592158 | orchestrator | 2026-01-03 02:21:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:50.638732 | orchestrator | 2026-01-03 02:21:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:21:50.640737 | orchestrator | 2026-01-03 02:21:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:21:50.640788 | orchestrator | 2026-01-03 02:21:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:53.686780 | orchestrator | 2026-01-03 02:21:53 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:21:53.688164 | orchestrator | 2026-01-03 02:21:53 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:21:53.688256 | orchestrator | 2026-01-03 02:21:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:56.733128 | orchestrator | 2026-01-03 02:21:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:21:56.734756 | orchestrator | 2026-01-03 02:21:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:21:56.734809 | orchestrator | 2026-01-03 02:21:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:59.775573 | orchestrator | 2026-01-03 02:21:59 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:21:59.776976 | orchestrator | 2026-01-03 02:21:59 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:21:59.777083 | orchestrator | 2026-01-03 02:21:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:22:02.820551 | orchestrator | 2026-01-03 02:22:02 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:22:02.822179 | orchestrator | 2026-01-03 02:22:02 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:22:02.822241 | orchestrator | 2026-01-03 02:22:02 | INFO  | Wait 1 second(s) 
until the next check
2026-01-03 02:22:05.873315 | orchestrator | 2026-01-03 02:22:05 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 02:22:05.874995 | orchestrator | 2026-01-03 02:22:05 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 02:22:05.875070 | orchestrator | 2026-01-03 02:22:05 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 02:22:08 through 02:27:16; both tasks remained in state STARTED throughout ...]
2026-01-03 02:27:19.829391 | orchestrator | 2026-01-03 02:27:19 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 02:27:19.831118 | orchestrator | 2026-01-03 02:27:19 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 02:27:19.831397 | orchestrator | 2026-01-03 02:27:19 | INFO  | Wait 1 second(s)
until the next check 2026-01-03 02:27:22.874908 | orchestrator | 2026-01-03 02:27:22 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:27:22.877427 | orchestrator | 2026-01-03 02:27:22 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:27:22.877478 | orchestrator | 2026-01-03 02:27:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:27:25.919062 | orchestrator | 2026-01-03 02:27:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:27:25.920928 | orchestrator | 2026-01-03 02:27:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:27:25.920963 | orchestrator | 2026-01-03 02:27:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:27:28.963988 | orchestrator | 2026-01-03 02:27:28 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:27:28.965850 | orchestrator | 2026-01-03 02:27:28 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:27:28.966060 | orchestrator | 2026-01-03 02:27:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:27:32.015286 | orchestrator | 2026-01-03 02:27:32 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:27:32.016102 | orchestrator | 2026-01-03 02:27:32 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:27:32.016134 | orchestrator | 2026-01-03 02:27:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:27:35.074643 | orchestrator | 2026-01-03 02:27:35 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:27:35.076883 | orchestrator | 2026-01-03 02:27:35 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:27:35.076964 | orchestrator | 2026-01-03 02:27:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:27:38.121757 | orchestrator | 2026-01-03 
02:27:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:27:38.124202 | orchestrator | 2026-01-03 02:27:38 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:27:38.124260 | orchestrator | 2026-01-03 02:27:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:27:41.169309 | orchestrator | 2026-01-03 02:27:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:27:41.171793 | orchestrator | 2026-01-03 02:27:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:27:41.171908 | orchestrator | 2026-01-03 02:27:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:27:44.221068 | orchestrator | 2026-01-03 02:27:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:27:44.222742 | orchestrator | 2026-01-03 02:27:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:27:44.222799 | orchestrator | 2026-01-03 02:27:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:27:47.273749 | orchestrator | 2026-01-03 02:27:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:27:47.276124 | orchestrator | 2026-01-03 02:27:47 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:27:47.276178 | orchestrator | 2026-01-03 02:27:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:27:50.321719 | orchestrator | 2026-01-03 02:27:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:27:50.323117 | orchestrator | 2026-01-03 02:27:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:27:50.323153 | orchestrator | 2026-01-03 02:27:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:27:53.375193 | orchestrator | 2026-01-03 02:27:53 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 02:27:53.378815 | orchestrator | 2026-01-03 02:27:53 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:27:53.378870 | orchestrator | 2026-01-03 02:27:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:27:56.426674 | orchestrator | 2026-01-03 02:27:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:27:56.430184 | orchestrator | 2026-01-03 02:27:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:27:56.430325 | orchestrator | 2026-01-03 02:27:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:27:59.481039 | orchestrator | 2026-01-03 02:27:59 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:27:59.482696 | orchestrator | 2026-01-03 02:27:59 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:27:59.482750 | orchestrator | 2026-01-03 02:27:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:28:02.533421 | orchestrator | 2026-01-03 02:28:02 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:28:02.535213 | orchestrator | 2026-01-03 02:28:02 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:28:02.535308 | orchestrator | 2026-01-03 02:28:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:28:05.590723 | orchestrator | 2026-01-03 02:28:05 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:28:05.592010 | orchestrator | 2026-01-03 02:28:05 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:28:05.592116 | orchestrator | 2026-01-03 02:28:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:28:08.643616 | orchestrator | 2026-01-03 02:28:08 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:28:08.645305 | orchestrator | 2026-01-03 02:28:08 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:28:08.645446 | orchestrator | 2026-01-03 02:28:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:28:11.691094 | orchestrator | 2026-01-03 02:28:11 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:28:11.693152 | orchestrator | 2026-01-03 02:28:11 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:28:11.693207 | orchestrator | 2026-01-03 02:28:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:28:14.745042 | orchestrator | 2026-01-03 02:28:14 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:28:14.746900 | orchestrator | 2026-01-03 02:28:14 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:28:14.746962 | orchestrator | 2026-01-03 02:28:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:28:17.798261 | orchestrator | 2026-01-03 02:28:17 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:28:17.800768 | orchestrator | 2026-01-03 02:28:17 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:28:17.800818 | orchestrator | 2026-01-03 02:28:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:28:20.848149 | orchestrator | 2026-01-03 02:28:20 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:28:20.850613 | orchestrator | 2026-01-03 02:28:20 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:28:20.850698 | orchestrator | 2026-01-03 02:28:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:28:23.897711 | orchestrator | 2026-01-03 02:28:23 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:28:23.899123 | orchestrator | 2026-01-03 02:28:23 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
02:28:23.899187 | orchestrator | 2026-01-03 02:28:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:28:26.947524 | orchestrator | 2026-01-03 02:28:26 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:28:26.949183 | orchestrator | 2026-01-03 02:28:26 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:28:26.949405 | orchestrator | 2026-01-03 02:28:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:28:30.000927 | orchestrator | 2026-01-03 02:28:30 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:28:30.002799 | orchestrator | 2026-01-03 02:28:30 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:28:30.002876 | orchestrator | 2026-01-03 02:28:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:28:33.054993 | orchestrator | 2026-01-03 02:28:33 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:28:33.055535 | orchestrator | 2026-01-03 02:28:33 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:28:33.055590 | orchestrator | 2026-01-03 02:28:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:28:36.101661 | orchestrator | 2026-01-03 02:28:36 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:28:36.104019 | orchestrator | 2026-01-03 02:28:36 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:28:36.104657 | orchestrator | 2026-01-03 02:28:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:28:39.154476 | orchestrator | 2026-01-03 02:28:39 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:28:39.156952 | orchestrator | 2026-01-03 02:28:39 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:28:39.157021 | orchestrator | 2026-01-03 02:28:39 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 02:28:42.206423 | orchestrator | 2026-01-03 02:28:42 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:28:42.208654 | orchestrator | 2026-01-03 02:28:42 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:28:42.208717 | orchestrator | 2026-01-03 02:28:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:28:45.253508 | orchestrator | 2026-01-03 02:28:45 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:28:45.254862 | orchestrator | 2026-01-03 02:28:45 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:28:45.254934 | orchestrator | 2026-01-03 02:28:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:28:48.300767 | orchestrator | 2026-01-03 02:28:48 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:28:48.301962 | orchestrator | 2026-01-03 02:28:48 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:28:48.301996 | orchestrator | 2026-01-03 02:28:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:28:51.348785 | orchestrator | 2026-01-03 02:28:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:28:51.352001 | orchestrator | 2026-01-03 02:28:51 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:28:51.352084 | orchestrator | 2026-01-03 02:28:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:28:54.390896 | orchestrator | 2026-01-03 02:28:54 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:28:54.391821 | orchestrator | 2026-01-03 02:28:54 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:28:54.391937 | orchestrator | 2026-01-03 02:28:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:28:57.432012 | orchestrator | 2026-01-03 
02:28:57 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:28:57.434168 | orchestrator | 2026-01-03 02:28:57 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:28:57.434250 | orchestrator | 2026-01-03 02:28:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:29:00.482965 | orchestrator | 2026-01-03 02:29:00 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:29:00.484831 | orchestrator | 2026-01-03 02:29:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:29:00.484954 | orchestrator | 2026-01-03 02:29:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:29:03.529647 | orchestrator | 2026-01-03 02:29:03 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:29:03.531400 | orchestrator | 2026-01-03 02:29:03 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:29:03.531444 | orchestrator | 2026-01-03 02:29:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:29:06.577624 | orchestrator | 2026-01-03 02:29:06 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:29:06.580598 | orchestrator | 2026-01-03 02:29:06 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:29:06.580665 | orchestrator | 2026-01-03 02:29:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:29:09.624783 | orchestrator | 2026-01-03 02:29:09 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:29:09.625993 | orchestrator | 2026-01-03 02:29:09 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:29:09.626095 | orchestrator | 2026-01-03 02:29:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:29:12.670816 | orchestrator | 2026-01-03 02:29:12 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 02:29:12.672646 | orchestrator | 2026-01-03 02:29:12 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:29:12.672728 | orchestrator | 2026-01-03 02:29:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:29:15.723461 | orchestrator | 2026-01-03 02:29:15 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:29:15.725681 | orchestrator | 2026-01-03 02:29:15 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:29:15.725795 | orchestrator | 2026-01-03 02:29:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:29:18.771640 | orchestrator | 2026-01-03 02:29:18 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:29:18.773491 | orchestrator | 2026-01-03 02:29:18 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:29:18.773546 | orchestrator | 2026-01-03 02:29:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:29:21.819844 | orchestrator | 2026-01-03 02:29:21 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:29:21.821377 | orchestrator | 2026-01-03 02:29:21 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:29:21.821443 | orchestrator | 2026-01-03 02:29:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:29:24.864620 | orchestrator | 2026-01-03 02:29:24 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:29:24.867140 | orchestrator | 2026-01-03 02:29:24 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:29:24.867199 | orchestrator | 2026-01-03 02:29:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:29:27.912549 | orchestrator | 2026-01-03 02:29:27 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:29:27.914661 | orchestrator | 2026-01-03 02:29:27 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:29:27.914732 | orchestrator | 2026-01-03 02:29:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:29:30.959953 | orchestrator | 2026-01-03 02:29:30 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:29:30.961722 | orchestrator | 2026-01-03 02:29:30 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:29:30.961794 | orchestrator | 2026-01-03 02:29:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:29:34.006491 | orchestrator | 2026-01-03 02:29:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:29:34.006691 | orchestrator | 2026-01-03 02:29:34 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:29:34.006810 | orchestrator | 2026-01-03 02:29:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:29:37.051172 | orchestrator | 2026-01-03 02:29:37 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:29:37.052913 | orchestrator | 2026-01-03 02:29:37 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:29:37.052965 | orchestrator | 2026-01-03 02:29:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:29:40.093805 | orchestrator | 2026-01-03 02:29:40 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:29:40.095717 | orchestrator | 2026-01-03 02:29:40 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:29:40.095971 | orchestrator | 2026-01-03 02:29:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:29:43.136212 | orchestrator | 2026-01-03 02:29:43 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:29:43.137076 | orchestrator | 2026-01-03 02:29:43 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
02:29:43.137130 | orchestrator | 2026-01-03 02:29:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:29:46.180837 | orchestrator | 2026-01-03 02:29:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:29:46.182086 | orchestrator | 2026-01-03 02:29:46 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:29:46.182177 | orchestrator | 2026-01-03 02:29:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:29:49.230181 | orchestrator | 2026-01-03 02:29:49 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:29:49.231474 | orchestrator | 2026-01-03 02:29:49 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:29:49.231518 | orchestrator | 2026-01-03 02:29:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:29:52.284065 | orchestrator | 2026-01-03 02:29:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:29:52.286167 | orchestrator | 2026-01-03 02:29:52 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:29:52.286216 | orchestrator | 2026-01-03 02:29:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:29:55.332803 | orchestrator | 2026-01-03 02:29:55 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:29:55.336709 | orchestrator | 2026-01-03 02:29:55 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:29:55.336792 | orchestrator | 2026-01-03 02:29:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:29:58.389114 | orchestrator | 2026-01-03 02:29:58 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:29:58.390962 | orchestrator | 2026-01-03 02:29:58 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:29:58.391064 | orchestrator | 2026-01-03 02:29:58 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 02:30:01.441263 | orchestrator | 2026-01-03 02:30:01 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:30:01.444095 | orchestrator | 2026-01-03 02:30:01 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:30:01.444215 | orchestrator | 2026-01-03 02:30:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:04.489262 | orchestrator | 2026-01-03 02:30:04 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:30:04.490675 | orchestrator | 2026-01-03 02:30:04 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:30:04.490770 | orchestrator | 2026-01-03 02:30:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:07.528527 | orchestrator | 2026-01-03 02:30:07 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:30:07.530097 | orchestrator | 2026-01-03 02:30:07 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:30:07.530140 | orchestrator | 2026-01-03 02:30:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:10.578167 | orchestrator | 2026-01-03 02:30:10 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:30:10.578242 | orchestrator | 2026-01-03 02:30:10 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:30:10.578250 | orchestrator | 2026-01-03 02:30:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:13.623703 | orchestrator | 2026-01-03 02:30:13 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:30:13.625668 | orchestrator | 2026-01-03 02:30:13 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:30:13.625753 | orchestrator | 2026-01-03 02:30:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:16.675267 | orchestrator | 2026-01-03 
02:30:16 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:30:16.677341 | orchestrator | 2026-01-03 02:30:16 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:30:16.677396 | orchestrator | 2026-01-03 02:30:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:19.738430 | orchestrator | 2026-01-03 02:30:19 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:30:19.738505 | orchestrator | 2026-01-03 02:30:19 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:30:19.738511 | orchestrator | 2026-01-03 02:30:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:22.766294 | orchestrator | 2026-01-03 02:30:22 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:30:22.768646 | orchestrator | 2026-01-03 02:30:22 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:30:22.768758 | orchestrator | 2026-01-03 02:30:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:25.814940 | orchestrator | 2026-01-03 02:30:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:30:25.816978 | orchestrator | 2026-01-03 02:30:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:30:25.817033 | orchestrator | 2026-01-03 02:30:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:28.864484 | orchestrator | 2026-01-03 02:30:28 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:30:28.866349 | orchestrator | 2026-01-03 02:30:28 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:30:28.866472 | orchestrator | 2026-01-03 02:30:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:31.912578 | orchestrator | 2026-01-03 02:30:31 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 02:30:31.914793 | orchestrator | 2026-01-03 02:30:31 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:30:31.915029 | orchestrator | 2026-01-03 02:30:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:34.960718 | orchestrator | 2026-01-03 02:30:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:30:34.962446 | orchestrator | 2026-01-03 02:30:34 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:30:34.962503 | orchestrator | 2026-01-03 02:30:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:38.012152 | orchestrator | 2026-01-03 02:30:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:30:38.013683 | orchestrator | 2026-01-03 02:30:38 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:30:38.013802 | orchestrator | 2026-01-03 02:30:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:41.055652 | orchestrator | 2026-01-03 02:30:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:30:41.056535 | orchestrator | 2026-01-03 02:30:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:30:41.056748 | orchestrator | 2026-01-03 02:30:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:44.089105 | orchestrator | 2026-01-03 02:30:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:30:44.090747 | orchestrator | 2026-01-03 02:30:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:30:44.090835 | orchestrator | 2026-01-03 02:30:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:47.138232 | orchestrator | 2026-01-03 02:30:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:30:47.139106 | orchestrator | 2026-01-03 02:30:47 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:30:47.139141 | orchestrator | 2026-01-03 02:30:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:50.186309 | orchestrator | 2026-01-03 02:30:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:30:50.187265 | orchestrator | 2026-01-03 02:30:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:30:50.187665 | orchestrator | 2026-01-03 02:30:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:53.233290 | orchestrator | 2026-01-03 02:30:53 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:30:53.235025 | orchestrator | 2026-01-03 02:30:53 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:30:53.235263 | orchestrator | 2026-01-03 02:30:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:56.269877 | orchestrator | 2026-01-03 02:30:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:30:56.271667 | orchestrator | 2026-01-03 02:30:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:30:56.271739 | orchestrator | 2026-01-03 02:30:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:59.317245 | orchestrator | 2026-01-03 02:30:59 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:30:59.318856 | orchestrator | 2026-01-03 02:30:59 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:30:59.318938 | orchestrator | 2026-01-03 02:30:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:31:02.371778 | orchestrator | 2026-01-03 02:31:02 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:31:02.372363 | orchestrator | 2026-01-03 02:31:02 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
02:31:02.372418 | orchestrator | 2026-01-03 02:31:02 | INFO  | Wait 1 second(s) until the next check
2026-01-03 02:31:05.411789 | orchestrator | 2026-01-03 02:31:05 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 02:31:05.412776 | orchestrator | 2026-01-03 02:31:05 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 02:31:05.413088 | orchestrator | 2026-01-03 02:31:05 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 s from 02:31:08 through 02:36:31; both tasks remained in state STARTED throughout ...]
2026-01-03 02:36:34.516016 | orchestrator | 2026-01-03 02:36:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 02:36:34.519836 | orchestrator | 2026-01-03 02:36:34 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 02:36:34.519928 | orchestrator | 2026-01-03 02:36:34 | INFO  | Wait 1 second(s)
until the next check 2026-01-03 02:36:37.569406 | orchestrator | 2026-01-03 02:36:37 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:36:37.572330 | orchestrator | 2026-01-03 02:36:37 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:36:37.572442 | orchestrator | 2026-01-03 02:36:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:36:40.620585 | orchestrator | 2026-01-03 02:36:40 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:36:40.623074 | orchestrator | 2026-01-03 02:36:40 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:36:40.623174 | orchestrator | 2026-01-03 02:36:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:36:43.666649 | orchestrator | 2026-01-03 02:36:43 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:36:43.668665 | orchestrator | 2026-01-03 02:36:43 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:36:43.668784 | orchestrator | 2026-01-03 02:36:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:36:46.714682 | orchestrator | 2026-01-03 02:36:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:36:46.717055 | orchestrator | 2026-01-03 02:36:46 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:36:46.717170 | orchestrator | 2026-01-03 02:36:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:36:49.764281 | orchestrator | 2026-01-03 02:36:49 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:36:49.766811 | orchestrator | 2026-01-03 02:36:49 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:36:49.766890 | orchestrator | 2026-01-03 02:36:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:36:52.807344 | orchestrator | 2026-01-03 
02:36:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:36:52.809710 | orchestrator | 2026-01-03 02:36:52 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:36:52.809765 | orchestrator | 2026-01-03 02:36:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:36:55.857886 | orchestrator | 2026-01-03 02:36:55 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:36:55.859585 | orchestrator | 2026-01-03 02:36:55 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:36:55.859683 | orchestrator | 2026-01-03 02:36:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:36:58.904977 | orchestrator | 2026-01-03 02:36:58 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:36:58.907217 | orchestrator | 2026-01-03 02:36:58 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:36:58.907650 | orchestrator | 2026-01-03 02:36:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:37:01.956050 | orchestrator | 2026-01-03 02:37:01 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:37:01.958237 | orchestrator | 2026-01-03 02:37:01 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:37:01.958348 | orchestrator | 2026-01-03 02:37:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:37:05.003750 | orchestrator | 2026-01-03 02:37:05 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:37:05.006579 | orchestrator | 2026-01-03 02:37:05 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:37:05.006652 | orchestrator | 2026-01-03 02:37:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:37:08.049428 | orchestrator | 2026-01-03 02:37:08 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 02:37:08.051255 | orchestrator | 2026-01-03 02:37:08 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:37:08.051354 | orchestrator | 2026-01-03 02:37:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:37:11.098525 | orchestrator | 2026-01-03 02:37:11 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:37:11.098623 | orchestrator | 2026-01-03 02:37:11 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:37:11.098636 | orchestrator | 2026-01-03 02:37:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:37:14.144628 | orchestrator | 2026-01-03 02:37:14 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:37:14.147942 | orchestrator | 2026-01-03 02:37:14 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:37:14.148027 | orchestrator | 2026-01-03 02:37:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:37:17.194369 | orchestrator | 2026-01-03 02:37:17 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:37:17.195699 | orchestrator | 2026-01-03 02:37:17 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:37:17.195905 | orchestrator | 2026-01-03 02:37:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:37:20.245714 | orchestrator | 2026-01-03 02:37:20 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:37:20.247107 | orchestrator | 2026-01-03 02:37:20 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:37:20.247819 | orchestrator | 2026-01-03 02:37:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:37:23.293607 | orchestrator | 2026-01-03 02:37:23 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:37:23.295089 | orchestrator | 2026-01-03 02:37:23 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:37:23.295137 | orchestrator | 2026-01-03 02:37:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:37:26.340407 | orchestrator | 2026-01-03 02:37:26 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:37:26.342133 | orchestrator | 2026-01-03 02:37:26 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:37:26.342198 | orchestrator | 2026-01-03 02:37:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:37:29.381482 | orchestrator | 2026-01-03 02:37:29 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:37:29.382325 | orchestrator | 2026-01-03 02:37:29 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:37:29.382364 | orchestrator | 2026-01-03 02:37:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:37:32.428439 | orchestrator | 2026-01-03 02:37:32 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:37:32.431065 | orchestrator | 2026-01-03 02:37:32 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:37:32.431123 | orchestrator | 2026-01-03 02:37:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:37:35.475538 | orchestrator | 2026-01-03 02:37:35 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:37:35.477191 | orchestrator | 2026-01-03 02:37:35 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:37:35.477244 | orchestrator | 2026-01-03 02:37:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:37:38.527039 | orchestrator | 2026-01-03 02:37:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:37:38.528249 | orchestrator | 2026-01-03 02:37:38 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
02:37:38.528399 | orchestrator | 2026-01-03 02:37:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:37:41.578495 | orchestrator | 2026-01-03 02:37:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:37:41.580151 | orchestrator | 2026-01-03 02:37:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:37:41.580213 | orchestrator | 2026-01-03 02:37:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:37:44.629983 | orchestrator | 2026-01-03 02:37:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:37:44.631400 | orchestrator | 2026-01-03 02:37:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:37:44.631527 | orchestrator | 2026-01-03 02:37:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:37:47.680587 | orchestrator | 2026-01-03 02:37:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:37:47.683714 | orchestrator | 2026-01-03 02:37:47 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:37:47.683772 | orchestrator | 2026-01-03 02:37:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:37:50.731580 | orchestrator | 2026-01-03 02:37:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:37:50.732804 | orchestrator | 2026-01-03 02:37:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:37:50.732863 | orchestrator | 2026-01-03 02:37:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:37:53.775802 | orchestrator | 2026-01-03 02:37:53 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:37:53.777831 | orchestrator | 2026-01-03 02:37:53 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:37:53.777906 | orchestrator | 2026-01-03 02:37:53 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 02:37:56.828864 | orchestrator | 2026-01-03 02:37:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:37:56.830490 | orchestrator | 2026-01-03 02:37:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:37:56.830603 | orchestrator | 2026-01-03 02:37:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:37:59.875800 | orchestrator | 2026-01-03 02:37:59 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:37:59.877852 | orchestrator | 2026-01-03 02:37:59 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:37:59.877927 | orchestrator | 2026-01-03 02:37:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:02.921139 | orchestrator | 2026-01-03 02:38:02 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:38:02.922500 | orchestrator | 2026-01-03 02:38:02 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:38:02.922828 | orchestrator | 2026-01-03 02:38:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:05.970265 | orchestrator | 2026-01-03 02:38:05 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:38:05.970968 | orchestrator | 2026-01-03 02:38:05 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:38:05.971011 | orchestrator | 2026-01-03 02:38:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:09.022957 | orchestrator | 2026-01-03 02:38:09 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:38:09.025161 | orchestrator | 2026-01-03 02:38:09 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:38:09.025221 | orchestrator | 2026-01-03 02:38:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:12.066967 | orchestrator | 2026-01-03 
02:38:12 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:38:12.067952 | orchestrator | 2026-01-03 02:38:12 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:38:12.067995 | orchestrator | 2026-01-03 02:38:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:15.114051 | orchestrator | 2026-01-03 02:38:15 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:38:15.116311 | orchestrator | 2026-01-03 02:38:15 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:38:15.116545 | orchestrator | 2026-01-03 02:38:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:18.163057 | orchestrator | 2026-01-03 02:38:18 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:38:18.165622 | orchestrator | 2026-01-03 02:38:18 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:38:18.165801 | orchestrator | 2026-01-03 02:38:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:21.211262 | orchestrator | 2026-01-03 02:38:21 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:38:21.212315 | orchestrator | 2026-01-03 02:38:21 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:38:21.212365 | orchestrator | 2026-01-03 02:38:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:24.258118 | orchestrator | 2026-01-03 02:38:24 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:38:24.259084 | orchestrator | 2026-01-03 02:38:24 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:38:24.259109 | orchestrator | 2026-01-03 02:38:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:27.304996 | orchestrator | 2026-01-03 02:38:27 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 02:38:27.306783 | orchestrator | 2026-01-03 02:38:27 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:38:27.306909 | orchestrator | 2026-01-03 02:38:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:30.350750 | orchestrator | 2026-01-03 02:38:30 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:38:30.351687 | orchestrator | 2026-01-03 02:38:30 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:38:30.351737 | orchestrator | 2026-01-03 02:38:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:33.394482 | orchestrator | 2026-01-03 02:38:33 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:38:33.395254 | orchestrator | 2026-01-03 02:38:33 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:38:33.395485 | orchestrator | 2026-01-03 02:38:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:36.439677 | orchestrator | 2026-01-03 02:38:36 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:38:36.440913 | orchestrator | 2026-01-03 02:38:36 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:38:36.440995 | orchestrator | 2026-01-03 02:38:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:39.487159 | orchestrator | 2026-01-03 02:38:39 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:38:39.489112 | orchestrator | 2026-01-03 02:38:39 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:38:39.489242 | orchestrator | 2026-01-03 02:38:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:42.536782 | orchestrator | 2026-01-03 02:38:42 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:38:42.538758 | orchestrator | 2026-01-03 02:38:42 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:38:42.539181 | orchestrator | 2026-01-03 02:38:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:45.585421 | orchestrator | 2026-01-03 02:38:45 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:38:45.587646 | orchestrator | 2026-01-03 02:38:45 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:38:45.587707 | orchestrator | 2026-01-03 02:38:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:48.635524 | orchestrator | 2026-01-03 02:38:48 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:38:48.637991 | orchestrator | 2026-01-03 02:38:48 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:38:48.638108 | orchestrator | 2026-01-03 02:38:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:51.682270 | orchestrator | 2026-01-03 02:38:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:38:51.684334 | orchestrator | 2026-01-03 02:38:51 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:38:51.684406 | orchestrator | 2026-01-03 02:38:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:54.724342 | orchestrator | 2026-01-03 02:38:54 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:38:54.725737 | orchestrator | 2026-01-03 02:38:54 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:38:54.725824 | orchestrator | 2026-01-03 02:38:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:57.770061 | orchestrator | 2026-01-03 02:38:57 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:38:57.772115 | orchestrator | 2026-01-03 02:38:57 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
02:38:57.772160 | orchestrator | 2026-01-03 02:38:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:00.813222 | orchestrator | 2026-01-03 02:39:00 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:39:00.814275 | orchestrator | 2026-01-03 02:39:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:39:00.814320 | orchestrator | 2026-01-03 02:39:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:03.862113 | orchestrator | 2026-01-03 02:39:03 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:39:03.864876 | orchestrator | 2026-01-03 02:39:03 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:39:03.864978 | orchestrator | 2026-01-03 02:39:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:06.917856 | orchestrator | 2026-01-03 02:39:06 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:39:06.920263 | orchestrator | 2026-01-03 02:39:06 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:39:06.920324 | orchestrator | 2026-01-03 02:39:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:09.969757 | orchestrator | 2026-01-03 02:39:09 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:39:09.971673 | orchestrator | 2026-01-03 02:39:09 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:39:09.971733 | orchestrator | 2026-01-03 02:39:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:13.019587 | orchestrator | 2026-01-03 02:39:13 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:39:13.020939 | orchestrator | 2026-01-03 02:39:13 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:39:13.020992 | orchestrator | 2026-01-03 02:39:13 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 02:39:16.068821 | orchestrator | 2026-01-03 02:39:16 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:39:16.070932 | orchestrator | 2026-01-03 02:39:16 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:39:16.070973 | orchestrator | 2026-01-03 02:39:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:19.115440 | orchestrator | 2026-01-03 02:39:19 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:39:19.117745 | orchestrator | 2026-01-03 02:39:19 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:39:19.117826 | orchestrator | 2026-01-03 02:39:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:22.167499 | orchestrator | 2026-01-03 02:39:22 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:39:22.169310 | orchestrator | 2026-01-03 02:39:22 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:39:22.169374 | orchestrator | 2026-01-03 02:39:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:25.212172 | orchestrator | 2026-01-03 02:39:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:39:25.213362 | orchestrator | 2026-01-03 02:39:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:39:25.213499 | orchestrator | 2026-01-03 02:39:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:28.261191 | orchestrator | 2026-01-03 02:39:28 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:39:28.262888 | orchestrator | 2026-01-03 02:39:28 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:39:28.262941 | orchestrator | 2026-01-03 02:39:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:31.305659 | orchestrator | 2026-01-03 
02:39:31 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:39:31.307650 | orchestrator | 2026-01-03 02:39:31 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:39:31.307712 | orchestrator | 2026-01-03 02:39:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:34.354291 | orchestrator | 2026-01-03 02:39:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:39:34.356306 | orchestrator | 2026-01-03 02:39:34 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:39:34.356366 | orchestrator | 2026-01-03 02:39:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:37.397487 | orchestrator | 2026-01-03 02:39:37 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:39:37.399791 | orchestrator | 2026-01-03 02:39:37 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:39:37.399856 | orchestrator | 2026-01-03 02:39:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:40.444331 | orchestrator | 2026-01-03 02:39:40 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:39:40.445685 | orchestrator | 2026-01-03 02:39:40 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:39:40.445736 | orchestrator | 2026-01-03 02:39:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:43.491629 | orchestrator | 2026-01-03 02:39:43 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:39:43.492679 | orchestrator | 2026-01-03 02:39:43 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:39:43.492714 | orchestrator | 2026-01-03 02:39:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:46.539990 | orchestrator | 2026-01-03 02:39:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 02:39:46.541364 | orchestrator | 2026-01-03 02:39:46 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:39:46.660161 | orchestrator | 2026-01-03 02:39:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:49.585662 | orchestrator | 2026-01-03 02:39:49 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:39:49.587900 | orchestrator | 2026-01-03 02:39:49 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:39:49.587981 | orchestrator | 2026-01-03 02:39:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:52.636220 | orchestrator | 2026-01-03 02:39:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:39:52.637696 | orchestrator | 2026-01-03 02:39:52 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:39:52.637741 | orchestrator | 2026-01-03 02:39:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:55.683477 | orchestrator | 2026-01-03 02:39:55 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:39:55.684954 | orchestrator | 2026-01-03 02:39:55 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:39:55.685104 | orchestrator | 2026-01-03 02:39:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:58.731482 | orchestrator | 2026-01-03 02:39:58 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:39:58.733966 | orchestrator | 2026-01-03 02:39:58 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:39:58.734267 | orchestrator | 2026-01-03 02:39:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:40:01.778738 | orchestrator | 2026-01-03 02:40:01 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:40:01.780215 | orchestrator | 2026-01-03 02:40:01 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:40:01.780311 | orchestrator | 2026-01-03 02:40:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:40:04.829284 | orchestrator | 2026-01-03 02:40:04 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:40:04.831838 | orchestrator | 2026-01-03 02:40:04 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:40:04.831936 | orchestrator | 2026-01-03 02:40:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:40:07.878365 | orchestrator | 2026-01-03 02:40:07 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:40:07.879908 | orchestrator | 2026-01-03 02:40:07 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:40:07.880389 | orchestrator | 2026-01-03 02:40:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:40:10.928026 | orchestrator | 2026-01-03 02:40:10 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:40:10.930605 | orchestrator | 2026-01-03 02:40:10 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:40:10.930692 | orchestrator | 2026-01-03 02:40:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:40:13.977587 | orchestrator | 2026-01-03 02:40:13 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:40:13.980251 | orchestrator | 2026-01-03 02:40:13 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:40:13.980335 | orchestrator | 2026-01-03 02:40:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:40:17.022406 | orchestrator | 2026-01-03 02:40:17 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:40:17.024420 | orchestrator | 2026-01-03 02:40:17 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
02:40:17.024486 | orchestrator | 2026-01-03 02:40:17 | INFO  | Wait 1 second(s) until the next check
2026-01-03 02:40:20.065252 | orchestrator | 2026-01-03 02:40:20 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 02:40:20.067085 | orchestrator | 2026-01-03 02:40:20 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 02:40:20.067131 | orchestrator | 2026-01-03 02:40:20 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 02:40:23 through 02:45:15: tasks b16f335a-ddd7-42d6-ae3e-bcacbe9793fa and 80634f4e-7557-474f-b43a-6fc42f9dfcdb remained in state STARTED ...]
2026-01-03 02:45:18.739957 | orchestrator | 2026-01-03 02:45:18 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 02:45:18.740832 | orchestrator | 2026-01-03 02:45:18 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:45:18.740872 | orchestrator | 2026-01-03 02:45:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:45:21.788433 | orchestrator | 2026-01-03 02:45:21 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:45:21.791420 | orchestrator | 2026-01-03 02:45:21 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:45:21.791501 | orchestrator | 2026-01-03 02:45:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:45:24.835808 | orchestrator | 2026-01-03 02:45:24 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:45:24.837992 | orchestrator | 2026-01-03 02:45:24 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:45:24.838102 | orchestrator | 2026-01-03 02:45:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:45:27.882361 | orchestrator | 2026-01-03 02:45:27 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:45:27.884155 | orchestrator | 2026-01-03 02:45:27 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:45:27.884227 | orchestrator | 2026-01-03 02:45:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:45:30.928528 | orchestrator | 2026-01-03 02:45:30 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:45:30.929914 | orchestrator | 2026-01-03 02:45:30 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:45:30.930191 | orchestrator | 2026-01-03 02:45:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:45:33.976919 | orchestrator | 2026-01-03 02:45:33 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:45:33.978575 | orchestrator | 2026-01-03 02:45:33 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
02:45:33.978615 | orchestrator | 2026-01-03 02:45:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:45:37.023657 | orchestrator | 2026-01-03 02:45:37 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:45:37.025182 | orchestrator | 2026-01-03 02:45:37 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:45:37.025281 | orchestrator | 2026-01-03 02:45:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:45:40.083804 | orchestrator | 2026-01-03 02:45:40 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:45:40.084919 | orchestrator | 2026-01-03 02:45:40 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:45:40.084992 | orchestrator | 2026-01-03 02:45:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:45:43.124068 | orchestrator | 2026-01-03 02:45:43 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:45:43.124168 | orchestrator | 2026-01-03 02:45:43 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:45:43.124225 | orchestrator | 2026-01-03 02:45:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:45:46.169070 | orchestrator | 2026-01-03 02:45:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:45:46.170842 | orchestrator | 2026-01-03 02:45:46 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:45:46.170898 | orchestrator | 2026-01-03 02:45:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:45:49.219658 | orchestrator | 2026-01-03 02:45:49 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:45:49.221027 | orchestrator | 2026-01-03 02:45:49 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:45:49.221089 | orchestrator | 2026-01-03 02:45:49 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 02:45:52.263908 | orchestrator | 2026-01-03 02:45:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:45:52.265635 | orchestrator | 2026-01-03 02:45:52 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:45:52.265681 | orchestrator | 2026-01-03 02:45:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:45:55.309829 | orchestrator | 2026-01-03 02:45:55 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:45:55.313016 | orchestrator | 2026-01-03 02:45:55 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:45:55.313114 | orchestrator | 2026-01-03 02:45:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:45:58.355826 | orchestrator | 2026-01-03 02:45:58 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:45:58.357509 | orchestrator | 2026-01-03 02:45:58 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:45:58.357563 | orchestrator | 2026-01-03 02:45:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:46:01.404406 | orchestrator | 2026-01-03 02:46:01 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:46:01.406336 | orchestrator | 2026-01-03 02:46:01 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:46:01.406497 | orchestrator | 2026-01-03 02:46:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:46:04.453586 | orchestrator | 2026-01-03 02:46:04 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:46:04.454658 | orchestrator | 2026-01-03 02:46:04 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:46:04.454707 | orchestrator | 2026-01-03 02:46:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:46:07.502227 | orchestrator | 2026-01-03 
02:46:07 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:46:07.505555 | orchestrator | 2026-01-03 02:46:07 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:46:07.505626 | orchestrator | 2026-01-03 02:46:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:46:10.557888 | orchestrator | 2026-01-03 02:46:10 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:46:10.558868 | orchestrator | 2026-01-03 02:46:10 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:46:10.558927 | orchestrator | 2026-01-03 02:46:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:46:13.606513 | orchestrator | 2026-01-03 02:46:13 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:46:13.608885 | orchestrator | 2026-01-03 02:46:13 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:46:13.608987 | orchestrator | 2026-01-03 02:46:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:46:16.653442 | orchestrator | 2026-01-03 02:46:16 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:46:16.654548 | orchestrator | 2026-01-03 02:46:16 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:46:16.654858 | orchestrator | 2026-01-03 02:46:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:46:19.705568 | orchestrator | 2026-01-03 02:46:19 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:46:19.708674 | orchestrator | 2026-01-03 02:46:19 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:46:19.708744 | orchestrator | 2026-01-03 02:46:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:46:22.754992 | orchestrator | 2026-01-03 02:46:22 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 02:46:22.756248 | orchestrator | 2026-01-03 02:46:22 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:46:22.756427 | orchestrator | 2026-01-03 02:46:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:46:25.804181 | orchestrator | 2026-01-03 02:46:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:46:25.806375 | orchestrator | 2026-01-03 02:46:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:46:25.806427 | orchestrator | 2026-01-03 02:46:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:46:28.852484 | orchestrator | 2026-01-03 02:46:28 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:46:28.854216 | orchestrator | 2026-01-03 02:46:28 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:46:28.854267 | orchestrator | 2026-01-03 02:46:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:46:31.898061 | orchestrator | 2026-01-03 02:46:31 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:46:31.901639 | orchestrator | 2026-01-03 02:46:31 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:46:31.901699 | orchestrator | 2026-01-03 02:46:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:46:34.947300 | orchestrator | 2026-01-03 02:46:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:46:34.950398 | orchestrator | 2026-01-03 02:46:34 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:46:34.950508 | orchestrator | 2026-01-03 02:46:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:46:37.995622 | orchestrator | 2026-01-03 02:46:37 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:46:37.997478 | orchestrator | 2026-01-03 02:46:37 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:46:37.997793 | orchestrator | 2026-01-03 02:46:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:46:41.044685 | orchestrator | 2026-01-03 02:46:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:46:41.047151 | orchestrator | 2026-01-03 02:46:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:46:41.047229 | orchestrator | 2026-01-03 02:46:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:46:44.094295 | orchestrator | 2026-01-03 02:46:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:46:44.097862 | orchestrator | 2026-01-03 02:46:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:46:44.097933 | orchestrator | 2026-01-03 02:46:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:46:47.135785 | orchestrator | 2026-01-03 02:46:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:46:47.137078 | orchestrator | 2026-01-03 02:46:47 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:46:47.137170 | orchestrator | 2026-01-03 02:46:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:46:50.185388 | orchestrator | 2026-01-03 02:46:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:46:50.187020 | orchestrator | 2026-01-03 02:46:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:46:50.187089 | orchestrator | 2026-01-03 02:46:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:46:53.230901 | orchestrator | 2026-01-03 02:46:53 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:46:53.233074 | orchestrator | 2026-01-03 02:46:53 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
02:46:53.233188 | orchestrator | 2026-01-03 02:46:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:46:56.277893 | orchestrator | 2026-01-03 02:46:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:46:56.279529 | orchestrator | 2026-01-03 02:46:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:46:56.279576 | orchestrator | 2026-01-03 02:46:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:46:59.324209 | orchestrator | 2026-01-03 02:46:59 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:46:59.325473 | orchestrator | 2026-01-03 02:46:59 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:46:59.325525 | orchestrator | 2026-01-03 02:46:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:47:02.365927 | orchestrator | 2026-01-03 02:47:02 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:47:02.367365 | orchestrator | 2026-01-03 02:47:02 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:47:02.367484 | orchestrator | 2026-01-03 02:47:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:47:05.408182 | orchestrator | 2026-01-03 02:47:05 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:47:05.410320 | orchestrator | 2026-01-03 02:47:05 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:47:05.410400 | orchestrator | 2026-01-03 02:47:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:47:08.451317 | orchestrator | 2026-01-03 02:47:08 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:47:08.454157 | orchestrator | 2026-01-03 02:47:08 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:47:08.454223 | orchestrator | 2026-01-03 02:47:08 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 02:47:11.497260 | orchestrator | 2026-01-03 02:47:11 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:47:11.500059 | orchestrator | 2026-01-03 02:47:11 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:47:11.500230 | orchestrator | 2026-01-03 02:47:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:47:14.544518 | orchestrator | 2026-01-03 02:47:14 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:47:14.545487 | orchestrator | 2026-01-03 02:47:14 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:47:14.545874 | orchestrator | 2026-01-03 02:47:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:47:17.590883 | orchestrator | 2026-01-03 02:47:17 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:47:17.592814 | orchestrator | 2026-01-03 02:47:17 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:47:17.592892 | orchestrator | 2026-01-03 02:47:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:47:20.634429 | orchestrator | 2026-01-03 02:47:20 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:47:20.636946 | orchestrator | 2026-01-03 02:47:20 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:47:20.637003 | orchestrator | 2026-01-03 02:47:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:47:23.678866 | orchestrator | 2026-01-03 02:47:23 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:47:23.681284 | orchestrator | 2026-01-03 02:47:23 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:47:23.681357 | orchestrator | 2026-01-03 02:47:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:47:26.733043 | orchestrator | 2026-01-03 
02:47:26 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:47:26.734854 | orchestrator | 2026-01-03 02:47:26 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:47:26.734927 | orchestrator | 2026-01-03 02:47:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:47:29.783524 | orchestrator | 2026-01-03 02:47:29 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:47:29.786745 | orchestrator | 2026-01-03 02:47:29 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:47:29.786835 | orchestrator | 2026-01-03 02:47:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:47:32.829641 | orchestrator | 2026-01-03 02:47:32 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:47:32.833790 | orchestrator | 2026-01-03 02:47:32 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:47:32.833884 | orchestrator | 2026-01-03 02:47:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:47:35.881210 | orchestrator | 2026-01-03 02:47:35 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:47:35.882657 | orchestrator | 2026-01-03 02:47:35 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:47:35.882710 | orchestrator | 2026-01-03 02:47:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:47:38.931016 | orchestrator | 2026-01-03 02:47:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:47:38.934213 | orchestrator | 2026-01-03 02:47:38 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:47:38.934279 | orchestrator | 2026-01-03 02:47:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:47:41.986562 | orchestrator | 2026-01-03 02:47:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 02:47:41.988740 | orchestrator | 2026-01-03 02:47:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:47:41.988807 | orchestrator | 2026-01-03 02:47:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:47:45.049244 | orchestrator | 2026-01-03 02:47:45 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:47:45.050654 | orchestrator | 2026-01-03 02:47:45 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:47:45.050703 | orchestrator | 2026-01-03 02:47:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:47:48.094861 | orchestrator | 2026-01-03 02:47:48 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:47:48.096253 | orchestrator | 2026-01-03 02:47:48 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:47:48.096305 | orchestrator | 2026-01-03 02:47:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:47:51.144784 | orchestrator | 2026-01-03 02:47:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:47:51.147368 | orchestrator | 2026-01-03 02:47:51 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:47:51.147580 | orchestrator | 2026-01-03 02:47:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:47:54.192008 | orchestrator | 2026-01-03 02:47:54 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:47:54.193357 | orchestrator | 2026-01-03 02:47:54 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:47:54.193475 | orchestrator | 2026-01-03 02:47:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:47:57.241010 | orchestrator | 2026-01-03 02:47:57 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:47:57.243327 | orchestrator | 2026-01-03 02:47:57 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:47:57.243497 | orchestrator | 2026-01-03 02:47:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:00.288070 | orchestrator | 2026-01-03 02:48:00 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:48:00.289848 | orchestrator | 2026-01-03 02:48:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:48:00.289903 | orchestrator | 2026-01-03 02:48:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:03.334422 | orchestrator | 2026-01-03 02:48:03 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:48:03.334660 | orchestrator | 2026-01-03 02:48:03 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:48:03.335148 | orchestrator | 2026-01-03 02:48:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:06.378367 | orchestrator | 2026-01-03 02:48:06 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:48:06.380292 | orchestrator | 2026-01-03 02:48:06 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:48:06.380343 | orchestrator | 2026-01-03 02:48:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:09.422927 | orchestrator | 2026-01-03 02:48:09 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:48:09.424549 | orchestrator | 2026-01-03 02:48:09 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:48:09.424597 | orchestrator | 2026-01-03 02:48:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:12.478491 | orchestrator | 2026-01-03 02:48:12 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:48:12.480175 | orchestrator | 2026-01-03 02:48:12 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
02:48:12.480256 | orchestrator | 2026-01-03 02:48:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:15.520626 | orchestrator | 2026-01-03 02:48:15 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:48:15.522765 | orchestrator | 2026-01-03 02:48:15 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:48:15.522844 | orchestrator | 2026-01-03 02:48:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:18.570985 | orchestrator | 2026-01-03 02:48:18 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:48:18.573286 | orchestrator | 2026-01-03 02:48:18 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:48:18.573344 | orchestrator | 2026-01-03 02:48:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:21.617005 | orchestrator | 2026-01-03 02:48:21 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:48:21.619044 | orchestrator | 2026-01-03 02:48:21 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:48:21.619239 | orchestrator | 2026-01-03 02:48:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:24.664732 | orchestrator | 2026-01-03 02:48:24 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:48:24.666992 | orchestrator | 2026-01-03 02:48:24 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:48:24.667037 | orchestrator | 2026-01-03 02:48:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:27.715768 | orchestrator | 2026-01-03 02:48:27 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:48:27.717235 | orchestrator | 2026-01-03 02:48:27 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:48:27.717424 | orchestrator | 2026-01-03 02:48:27 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 02:48:30.768392 | orchestrator | 2026-01-03 02:48:30 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:48:30.769875 | orchestrator | 2026-01-03 02:48:30 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:48:30.769925 | orchestrator | 2026-01-03 02:48:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:33.814956 | orchestrator | 2026-01-03 02:48:33 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:48:33.816738 | orchestrator | 2026-01-03 02:48:33 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:48:33.816838 | orchestrator | 2026-01-03 02:48:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:36.864332 | orchestrator | 2026-01-03 02:48:36 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:48:36.866447 | orchestrator | 2026-01-03 02:48:36 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:48:36.866498 | orchestrator | 2026-01-03 02:48:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:39.909584 | orchestrator | 2026-01-03 02:48:39 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:48:39.911045 | orchestrator | 2026-01-03 02:48:39 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:48:39.911119 | orchestrator | 2026-01-03 02:48:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:42.960344 | orchestrator | 2026-01-03 02:48:42 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:48:42.960899 | orchestrator | 2026-01-03 02:48:42 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:48:42.961039 | orchestrator | 2026-01-03 02:48:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:46.009559 | orchestrator | 2026-01-03 
02:48:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:48:46.012200 | orchestrator | 2026-01-03 02:48:46 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:48:46.012304 | orchestrator | 2026-01-03 02:48:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:49.056408 | orchestrator | 2026-01-03 02:48:49 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:48:49.058667 | orchestrator | 2026-01-03 02:48:49 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:48:49.058730 | orchestrator | 2026-01-03 02:48:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:52.100628 | orchestrator | 2026-01-03 02:48:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:48:52.102899 | orchestrator | 2026-01-03 02:48:52 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:48:52.103077 | orchestrator | 2026-01-03 02:48:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:55.150184 | orchestrator | 2026-01-03 02:48:55 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:48:55.152130 | orchestrator | 2026-01-03 02:48:55 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:48:55.152207 | orchestrator | 2026-01-03 02:48:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:58.198935 | orchestrator | 2026-01-03 02:48:58 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:48:58.200606 | orchestrator | 2026-01-03 02:48:58 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:48:58.200686 | orchestrator | 2026-01-03 02:48:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:49:01.239825 | orchestrator | 2026-01-03 02:49:01 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 02:49:01.241476 | orchestrator | 2026-01-03 02:49:01 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:49:01.242086 | orchestrator | 2026-01-03 02:49:01 | INFO  | Wait 1 second(s) until the next check
[... identical polling output elided: tasks b16f335a-ddd7-42d6-ae3e-bcacbe9793fa and 80634f4e-7557-474f-b43a-6fc42f9dfcdb repeatedly reported in state STARTED, with a "Wait 1 second(s) until the next check" line after each pair, every ~3 seconds from 02:49:04 through 02:54:30 ...]
2026-01-03 02:54:33.535669 | orchestrator | 2026-01-03 02:54:33 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:54:33.536702 | orchestrator | 2026-01-03 02:54:33 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:54:33.536750 | orchestrator | 2026-01-03 02:54:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:54:36.583623 | orchestrator | 2026-01-03 02:54:36 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:54:36.584579 | orchestrator | 2026-01-03 02:54:36 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:54:36.584635 | orchestrator | 2026-01-03 02:54:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:54:39.633335 | orchestrator | 2026-01-03 02:54:39 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:54:39.635986 | orchestrator | 2026-01-03 02:54:39 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:54:39.636115 | orchestrator | 2026-01-03 02:54:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:54:42.679117 | orchestrator | 2026-01-03 02:54:42 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:54:42.679786 | orchestrator | 2026-01-03 02:54:42 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:54:42.679805 | orchestrator | 2026-01-03 02:54:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:54:45.728753 | orchestrator | 2026-01-03 02:54:45 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:54:45.732914 | orchestrator | 2026-01-03 02:54:45 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:54:45.732965 | orchestrator | 2026-01-03 02:54:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:54:48.783956 | orchestrator | 2026-01-03 02:54:48 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:54:48.786359 | orchestrator | 2026-01-03 02:54:48 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
02:54:48.786412 | orchestrator | 2026-01-03 02:54:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:54:51.837869 | orchestrator | 2026-01-03 02:54:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:54:51.839816 | orchestrator | 2026-01-03 02:54:51 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:54:51.839868 | orchestrator | 2026-01-03 02:54:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:54:54.888994 | orchestrator | 2026-01-03 02:54:54 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:54:54.890266 | orchestrator | 2026-01-03 02:54:54 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:54:54.890308 | orchestrator | 2026-01-03 02:54:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:54:57.936517 | orchestrator | 2026-01-03 02:54:57 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:54:57.938190 | orchestrator | 2026-01-03 02:54:57 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:54:57.938257 | orchestrator | 2026-01-03 02:54:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:55:00.983942 | orchestrator | 2026-01-03 02:55:00 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:55:00.985655 | orchestrator | 2026-01-03 02:55:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:55:00.985712 | orchestrator | 2026-01-03 02:55:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:55:04.041239 | orchestrator | 2026-01-03 02:55:04 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:55:04.043651 | orchestrator | 2026-01-03 02:55:04 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:55:04.043725 | orchestrator | 2026-01-03 02:55:04 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 02:55:07.095826 | orchestrator | 2026-01-03 02:55:07 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:55:07.097546 | orchestrator | 2026-01-03 02:55:07 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:55:07.097669 | orchestrator | 2026-01-03 02:55:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:55:10.148939 | orchestrator | 2026-01-03 02:55:10 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:55:10.150559 | orchestrator | 2026-01-03 02:55:10 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:55:10.150697 | orchestrator | 2026-01-03 02:55:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:55:13.200431 | orchestrator | 2026-01-03 02:55:13 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:55:13.202975 | orchestrator | 2026-01-03 02:55:13 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:55:13.203042 | orchestrator | 2026-01-03 02:55:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:55:16.253840 | orchestrator | 2026-01-03 02:55:16 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:55:16.255807 | orchestrator | 2026-01-03 02:55:16 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:55:16.255917 | orchestrator | 2026-01-03 02:55:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:55:19.301938 | orchestrator | 2026-01-03 02:55:19 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:55:19.304738 | orchestrator | 2026-01-03 02:55:19 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:55:19.304813 | orchestrator | 2026-01-03 02:55:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:55:22.348661 | orchestrator | 2026-01-03 
02:55:22 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:55:22.350545 | orchestrator | 2026-01-03 02:55:22 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:55:22.350613 | orchestrator | 2026-01-03 02:55:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:55:25.400998 | orchestrator | 2026-01-03 02:55:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:55:25.402764 | orchestrator | 2026-01-03 02:55:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:55:25.402815 | orchestrator | 2026-01-03 02:55:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:55:28.451545 | orchestrator | 2026-01-03 02:55:28 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:55:28.452794 | orchestrator | 2026-01-03 02:55:28 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:55:28.452850 | orchestrator | 2026-01-03 02:55:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:55:31.503178 | orchestrator | 2026-01-03 02:55:31 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:55:31.507661 | orchestrator | 2026-01-03 02:55:31 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:55:31.508038 | orchestrator | 2026-01-03 02:55:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:55:34.559010 | orchestrator | 2026-01-03 02:55:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:55:34.561009 | orchestrator | 2026-01-03 02:55:34 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:55:34.561108 | orchestrator | 2026-01-03 02:55:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:55:37.611607 | orchestrator | 2026-01-03 02:55:37 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 02:55:37.613429 | orchestrator | 2026-01-03 02:55:37 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:55:37.613570 | orchestrator | 2026-01-03 02:55:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:55:40.669294 | orchestrator | 2026-01-03 02:55:40 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:55:40.671147 | orchestrator | 2026-01-03 02:55:40 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:55:40.671254 | orchestrator | 2026-01-03 02:55:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:55:43.718277 | orchestrator | 2026-01-03 02:55:43 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:55:43.720314 | orchestrator | 2026-01-03 02:55:43 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:55:43.720362 | orchestrator | 2026-01-03 02:55:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:55:46.772048 | orchestrator | 2026-01-03 02:55:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:55:46.773989 | orchestrator | 2026-01-03 02:55:46 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:55:46.774121 | orchestrator | 2026-01-03 02:55:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:55:49.818264 | orchestrator | 2026-01-03 02:55:49 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:55:49.820625 | orchestrator | 2026-01-03 02:55:49 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:55:49.820686 | orchestrator | 2026-01-03 02:55:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:55:52.870360 | orchestrator | 2026-01-03 02:55:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:55:52.871808 | orchestrator | 2026-01-03 02:55:52 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:55:52.871860 | orchestrator | 2026-01-03 02:55:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:55:55.924373 | orchestrator | 2026-01-03 02:55:55 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:55:55.926126 | orchestrator | 2026-01-03 02:55:55 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:55:55.926200 | orchestrator | 2026-01-03 02:55:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:55:58.977025 | orchestrator | 2026-01-03 02:55:58 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:55:58.978722 | orchestrator | 2026-01-03 02:55:58 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:55:58.978875 | orchestrator | 2026-01-03 02:55:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:56:02.028027 | orchestrator | 2026-01-03 02:56:02 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:56:02.029717 | orchestrator | 2026-01-03 02:56:02 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:56:02.030000 | orchestrator | 2026-01-03 02:56:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:56:05.082086 | orchestrator | 2026-01-03 02:56:05 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:56:05.083455 | orchestrator | 2026-01-03 02:56:05 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:56:05.083551 | orchestrator | 2026-01-03 02:56:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:56:08.128972 | orchestrator | 2026-01-03 02:56:08 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:56:08.130096 | orchestrator | 2026-01-03 02:56:08 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
02:56:08.130182 | orchestrator | 2026-01-03 02:56:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:56:11.175880 | orchestrator | 2026-01-03 02:56:11 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:56:11.177807 | orchestrator | 2026-01-03 02:56:11 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:56:11.177846 | orchestrator | 2026-01-03 02:56:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:56:14.227147 | orchestrator | 2026-01-03 02:56:14 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:56:14.228828 | orchestrator | 2026-01-03 02:56:14 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:56:14.228871 | orchestrator | 2026-01-03 02:56:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:56:17.285107 | orchestrator | 2026-01-03 02:56:17 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:56:17.288326 | orchestrator | 2026-01-03 02:56:17 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:56:17.288380 | orchestrator | 2026-01-03 02:56:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:56:20.340715 | orchestrator | 2026-01-03 02:56:20 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:56:20.341989 | orchestrator | 2026-01-03 02:56:20 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:56:20.342080 | orchestrator | 2026-01-03 02:56:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:56:23.390818 | orchestrator | 2026-01-03 02:56:23 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:56:23.391994 | orchestrator | 2026-01-03 02:56:23 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:56:23.392070 | orchestrator | 2026-01-03 02:56:23 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 02:56:26.442885 | orchestrator | 2026-01-03 02:56:26 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:56:26.445083 | orchestrator | 2026-01-03 02:56:26 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:56:26.445124 | orchestrator | 2026-01-03 02:56:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:56:29.496197 | orchestrator | 2026-01-03 02:56:29 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:56:29.498933 | orchestrator | 2026-01-03 02:56:29 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:56:29.499008 | orchestrator | 2026-01-03 02:56:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:56:32.544608 | orchestrator | 2026-01-03 02:56:32 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:56:32.545920 | orchestrator | 2026-01-03 02:56:32 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:56:32.545984 | orchestrator | 2026-01-03 02:56:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:56:35.586233 | orchestrator | 2026-01-03 02:56:35 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:56:35.588096 | orchestrator | 2026-01-03 02:56:35 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:56:35.588239 | orchestrator | 2026-01-03 02:56:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:56:38.633181 | orchestrator | 2026-01-03 02:56:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:56:38.635359 | orchestrator | 2026-01-03 02:56:38 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:56:38.635428 | orchestrator | 2026-01-03 02:56:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:56:41.683486 | orchestrator | 2026-01-03 
02:56:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:56:41.685288 | orchestrator | 2026-01-03 02:56:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:56:41.685402 | orchestrator | 2026-01-03 02:56:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:56:44.727379 | orchestrator | 2026-01-03 02:56:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:56:44.727482 | orchestrator | 2026-01-03 02:56:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:56:44.729142 | orchestrator | 2026-01-03 02:56:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:56:47.775843 | orchestrator | 2026-01-03 02:56:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:56:47.777386 | orchestrator | 2026-01-03 02:56:47 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:56:47.777489 | orchestrator | 2026-01-03 02:56:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:56:50.823852 | orchestrator | 2026-01-03 02:56:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:56:50.825362 | orchestrator | 2026-01-03 02:56:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:56:50.825421 | orchestrator | 2026-01-03 02:56:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:56:53.872758 | orchestrator | 2026-01-03 02:56:53 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:56:53.875872 | orchestrator | 2026-01-03 02:56:53 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:56:53.875959 | orchestrator | 2026-01-03 02:56:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:56:56.923780 | orchestrator | 2026-01-03 02:56:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 02:56:56.925378 | orchestrator | 2026-01-03 02:56:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:56:56.925428 | orchestrator | 2026-01-03 02:56:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:56:59.976420 | orchestrator | 2026-01-03 02:56:59 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:56:59.978425 | orchestrator | 2026-01-03 02:56:59 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:56:59.978474 | orchestrator | 2026-01-03 02:56:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:03.026348 | orchestrator | 2026-01-03 02:57:03 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:57:03.026462 | orchestrator | 2026-01-03 02:57:03 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:57:03.026474 | orchestrator | 2026-01-03 02:57:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:06.070558 | orchestrator | 2026-01-03 02:57:06 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:57:06.072160 | orchestrator | 2026-01-03 02:57:06 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:57:06.072635 | orchestrator | 2026-01-03 02:57:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:09.112696 | orchestrator | 2026-01-03 02:57:09 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:57:09.114440 | orchestrator | 2026-01-03 02:57:09 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:57:09.114493 | orchestrator | 2026-01-03 02:57:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:12.161795 | orchestrator | 2026-01-03 02:57:12 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:57:12.164125 | orchestrator | 2026-01-03 02:57:12 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:57:12.164222 | orchestrator | 2026-01-03 02:57:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:15.207268 | orchestrator | 2026-01-03 02:57:15 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:57:15.208462 | orchestrator | 2026-01-03 02:57:15 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:57:15.208601 | orchestrator | 2026-01-03 02:57:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:18.251683 | orchestrator | 2026-01-03 02:57:18 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:57:18.252554 | orchestrator | 2026-01-03 02:57:18 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:57:18.252689 | orchestrator | 2026-01-03 02:57:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:21.294085 | orchestrator | 2026-01-03 02:57:21 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:57:21.296341 | orchestrator | 2026-01-03 02:57:21 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:57:21.296395 | orchestrator | 2026-01-03 02:57:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:24.340898 | orchestrator | 2026-01-03 02:57:24 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:57:24.343074 | orchestrator | 2026-01-03 02:57:24 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:57:24.343183 | orchestrator | 2026-01-03 02:57:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:27.385727 | orchestrator | 2026-01-03 02:57:27 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:57:27.387594 | orchestrator | 2026-01-03 02:57:27 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
02:57:27.387644 | orchestrator | 2026-01-03 02:57:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:30.427713 | orchestrator | 2026-01-03 02:57:30 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:57:30.429339 | orchestrator | 2026-01-03 02:57:30 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:57:30.429410 | orchestrator | 2026-01-03 02:57:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:33.478606 | orchestrator | 2026-01-03 02:57:33 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:57:33.482633 | orchestrator | 2026-01-03 02:57:33 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:57:33.483091 | orchestrator | 2026-01-03 02:57:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:36.528894 | orchestrator | 2026-01-03 02:57:36 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:57:36.530872 | orchestrator | 2026-01-03 02:57:36 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:57:36.530986 | orchestrator | 2026-01-03 02:57:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:39.579698 | orchestrator | 2026-01-03 02:57:39 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:57:39.582357 | orchestrator | 2026-01-03 02:57:39 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:57:39.582422 | orchestrator | 2026-01-03 02:57:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:42.636815 | orchestrator | 2026-01-03 02:57:42 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:57:42.638485 | orchestrator | 2026-01-03 02:57:42 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:57:42.638587 | orchestrator | 2026-01-03 02:57:42 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 02:57:45.683819 | orchestrator | 2026-01-03 02:57:45 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:57:45.686587 | orchestrator | 2026-01-03 02:57:45 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:57:45.686641 | orchestrator | 2026-01-03 02:57:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:48.738214 | orchestrator | 2026-01-03 02:57:48 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:57:48.739719 | orchestrator | 2026-01-03 02:57:48 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:57:48.739804 | orchestrator | 2026-01-03 02:57:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:51.788786 | orchestrator | 2026-01-03 02:57:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:57:51.791854 | orchestrator | 2026-01-03 02:57:51 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:57:51.791898 | orchestrator | 2026-01-03 02:57:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:54.840693 | orchestrator | 2026-01-03 02:57:54 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:57:54.842719 | orchestrator | 2026-01-03 02:57:54 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:57:54.842813 | orchestrator | 2026-01-03 02:57:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:57.891698 | orchestrator | 2026-01-03 02:57:57 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:57:57.892985 | orchestrator | 2026-01-03 02:57:57 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:57:57.893039 | orchestrator | 2026-01-03 02:57:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:58:00.936974 | orchestrator | 2026-01-03 
02:58:00 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:58:00.939683 | orchestrator | 2026-01-03 02:58:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:58:00.939742 | orchestrator | 2026-01-03 02:58:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:58:03.989739 | orchestrator | 2026-01-03 02:58:03 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:58:03.991980 | orchestrator | 2026-01-03 02:58:03 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:58:03.992042 | orchestrator | 2026-01-03 02:58:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:58:07.035392 | orchestrator | 2026-01-03 02:58:07 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:58:07.036603 | orchestrator | 2026-01-03 02:58:07 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:58:07.036708 | orchestrator | 2026-01-03 02:58:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:58:10.087683 | orchestrator | 2026-01-03 02:58:10 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:58:10.090404 | orchestrator | 2026-01-03 02:58:10 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:58:10.090495 | orchestrator | 2026-01-03 02:58:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:58:13.140569 | orchestrator | 2026-01-03 02:58:13 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:58:13.142624 | orchestrator | 2026-01-03 02:58:13 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:58:13.142685 | orchestrator | 2026-01-03 02:58:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:58:16.188905 | orchestrator | 2026-01-03 02:58:16 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 02:58:16.191001 | orchestrator | 2026-01-03 02:58:16 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:58:16.191047 | orchestrator | 2026-01-03 02:58:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:58:19.238292 | orchestrator | 2026-01-03 02:58:19 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:58:19.240130 | orchestrator | 2026-01-03 02:58:19 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:58:19.240181 | orchestrator | 2026-01-03 02:58:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:58:22.283275 | orchestrator | 2026-01-03 02:58:22 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:58:22.285779 | orchestrator | 2026-01-03 02:58:22 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:58:22.285844 | orchestrator | 2026-01-03 02:58:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:58:25.329359 | orchestrator | 2026-01-03 02:58:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:58:25.330290 | orchestrator | 2026-01-03 02:58:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:58:25.330709 | orchestrator | 2026-01-03 02:58:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:58:28.379277 | orchestrator | 2026-01-03 02:58:28 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:58:28.380449 | orchestrator | 2026-01-03 02:58:28 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:58:28.380484 | orchestrator | 2026-01-03 02:58:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:58:31.426325 | orchestrator | 2026-01-03 02:58:31 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 02:58:31.428332 | orchestrator | 2026-01-03 02:58:31 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 02:58:31.428464 | orchestrator | 2026-01-03 02:58:31 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 02:58:34 through 03:03:14; tasks b16f335a-ddd7-42d6-ae3e-bcacbe9793fa and 80634f4e-7557-474f-b43a-6fc42f9dfcdb remained in state STARTED throughout ...]
2026-01-03 03:03:17.862422 | orchestrator | 2026-01-03 
03:03:17 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:03:17.863660 | orchestrator | 2026-01-03 03:03:17 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:03:17.863695 | orchestrator | 2026-01-03 03:03:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:03:20.907157 | orchestrator | 2026-01-03 03:03:20 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:03:20.908793 | orchestrator | 2026-01-03 03:03:20 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:03:20.908850 | orchestrator | 2026-01-03 03:03:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:03:23.951670 | orchestrator | 2026-01-03 03:03:23 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:03:23.952748 | orchestrator | 2026-01-03 03:03:23 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:03:23.952841 | orchestrator | 2026-01-03 03:03:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:03:27.002636 | orchestrator | 2026-01-03 03:03:27 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:03:27.004653 | orchestrator | 2026-01-03 03:03:27 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:03:27.004719 | orchestrator | 2026-01-03 03:03:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:03:30.049408 | orchestrator | 2026-01-03 03:03:30 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:03:30.050174 | orchestrator | 2026-01-03 03:03:30 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:03:30.050198 | orchestrator | 2026-01-03 03:03:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:03:33.088696 | orchestrator | 2026-01-03 03:03:33 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 03:03:33.089139 | orchestrator | 2026-01-03 03:03:33 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:03:33.089161 | orchestrator | 2026-01-03 03:03:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:03:36.133371 | orchestrator | 2026-01-03 03:03:36 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:03:36.135558 | orchestrator | 2026-01-03 03:03:36 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:03:36.135602 | orchestrator | 2026-01-03 03:03:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:03:39.178617 | orchestrator | 2026-01-03 03:03:39 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:03:39.182869 | orchestrator | 2026-01-03 03:03:39 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:03:39.182976 | orchestrator | 2026-01-03 03:03:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:03:42.224529 | orchestrator | 2026-01-03 03:03:42 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:03:42.227279 | orchestrator | 2026-01-03 03:03:42 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:03:42.227360 | orchestrator | 2026-01-03 03:03:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:03:45.274172 | orchestrator | 2026-01-03 03:03:45 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:03:45.274425 | orchestrator | 2026-01-03 03:03:45 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:03:45.274484 | orchestrator | 2026-01-03 03:03:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:03:48.311924 | orchestrator | 2026-01-03 03:03:48 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:03:48.313267 | orchestrator | 2026-01-03 03:03:48 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:03:48.313317 | orchestrator | 2026-01-03 03:03:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:03:51.360565 | orchestrator | 2026-01-03 03:03:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:03:51.361713 | orchestrator | 2026-01-03 03:03:51 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:03:51.361747 | orchestrator | 2026-01-03 03:03:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:03:54.407051 | orchestrator | 2026-01-03 03:03:54 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:03:54.411123 | orchestrator | 2026-01-03 03:03:54 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:03:54.411180 | orchestrator | 2026-01-03 03:03:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:03:57.465048 | orchestrator | 2026-01-03 03:03:57 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:03:57.466764 | orchestrator | 2026-01-03 03:03:57 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:03:57.467304 | orchestrator | 2026-01-03 03:03:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:04:00.510553 | orchestrator | 2026-01-03 03:04:00 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:04:00.511777 | orchestrator | 2026-01-03 03:04:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:04:00.511839 | orchestrator | 2026-01-03 03:04:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:04:03.553021 | orchestrator | 2026-01-03 03:04:03 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:04:03.554788 | orchestrator | 2026-01-03 03:04:03 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
03:04:03.554874 | orchestrator | 2026-01-03 03:04:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:04:06.601721 | orchestrator | 2026-01-03 03:04:06 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:04:06.603551 | orchestrator | 2026-01-03 03:04:06 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:04:06.603592 | orchestrator | 2026-01-03 03:04:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:04:09.652316 | orchestrator | 2026-01-03 03:04:09 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:04:09.654087 | orchestrator | 2026-01-03 03:04:09 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:04:09.654185 | orchestrator | 2026-01-03 03:04:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:04:12.706585 | orchestrator | 2026-01-03 03:04:12 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:04:12.707270 | orchestrator | 2026-01-03 03:04:12 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:04:12.707436 | orchestrator | 2026-01-03 03:04:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:04:15.752443 | orchestrator | 2026-01-03 03:04:15 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:04:15.753792 | orchestrator | 2026-01-03 03:04:15 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:04:15.753853 | orchestrator | 2026-01-03 03:04:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:04:18.798266 | orchestrator | 2026-01-03 03:04:18 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:04:18.799973 | orchestrator | 2026-01-03 03:04:18 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:04:18.800350 | orchestrator | 2026-01-03 03:04:18 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 03:04:21.848120 | orchestrator | 2026-01-03 03:04:21 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:04:21.850214 | orchestrator | 2026-01-03 03:04:21 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:04:21.850318 | orchestrator | 2026-01-03 03:04:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:04:24.897903 | orchestrator | 2026-01-03 03:04:24 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:04:24.900130 | orchestrator | 2026-01-03 03:04:24 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:04:24.900199 | orchestrator | 2026-01-03 03:04:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:04:27.954845 | orchestrator | 2026-01-03 03:04:27 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:04:27.955624 | orchestrator | 2026-01-03 03:04:27 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:04:27.955967 | orchestrator | 2026-01-03 03:04:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:04:30.997571 | orchestrator | 2026-01-03 03:04:30 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:04:30.998700 | orchestrator | 2026-01-03 03:04:30 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:04:30.998754 | orchestrator | 2026-01-03 03:04:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:04:34.041853 | orchestrator | 2026-01-03 03:04:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:04:34.044131 | orchestrator | 2026-01-03 03:04:34 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:04:34.044198 | orchestrator | 2026-01-03 03:04:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:04:37.090851 | orchestrator | 2026-01-03 
03:04:37 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:04:37.092840 | orchestrator | 2026-01-03 03:04:37 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:04:37.092907 | orchestrator | 2026-01-03 03:04:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:04:40.137402 | orchestrator | 2026-01-03 03:04:40 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:04:40.139103 | orchestrator | 2026-01-03 03:04:40 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:04:40.139287 | orchestrator | 2026-01-03 03:04:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:04:43.183771 | orchestrator | 2026-01-03 03:04:43 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:04:43.184530 | orchestrator | 2026-01-03 03:04:43 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:04:43.184630 | orchestrator | 2026-01-03 03:04:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:04:46.228082 | orchestrator | 2026-01-03 03:04:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:04:46.230318 | orchestrator | 2026-01-03 03:04:46 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:04:46.230374 | orchestrator | 2026-01-03 03:04:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:04:49.275150 | orchestrator | 2026-01-03 03:04:49 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:04:49.277097 | orchestrator | 2026-01-03 03:04:49 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:04:49.278428 | orchestrator | 2026-01-03 03:04:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:04:52.322605 | orchestrator | 2026-01-03 03:04:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 03:04:52.324366 | orchestrator | 2026-01-03 03:04:52 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:04:52.324412 | orchestrator | 2026-01-03 03:04:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:04:55.371766 | orchestrator | 2026-01-03 03:04:55 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:04:55.372880 | orchestrator | 2026-01-03 03:04:55 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:04:55.372963 | orchestrator | 2026-01-03 03:04:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:04:58.423977 | orchestrator | 2026-01-03 03:04:58 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:04:58.425258 | orchestrator | 2026-01-03 03:04:58 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:04:58.425309 | orchestrator | 2026-01-03 03:04:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:01.470238 | orchestrator | 2026-01-03 03:05:01 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:05:01.471156 | orchestrator | 2026-01-03 03:05:01 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:05:01.471418 | orchestrator | 2026-01-03 03:05:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:04.518565 | orchestrator | 2026-01-03 03:05:04 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:05:04.520879 | orchestrator | 2026-01-03 03:05:04 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:05:04.520996 | orchestrator | 2026-01-03 03:05:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:07.567008 | orchestrator | 2026-01-03 03:05:07 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:05:07.568710 | orchestrator | 2026-01-03 03:05:07 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:05:07.568773 | orchestrator | 2026-01-03 03:05:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:10.616590 | orchestrator | 2026-01-03 03:05:10 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:05:10.618317 | orchestrator | 2026-01-03 03:05:10 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:05:10.618355 | orchestrator | 2026-01-03 03:05:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:13.665627 | orchestrator | 2026-01-03 03:05:13 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:05:13.666557 | orchestrator | 2026-01-03 03:05:13 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:05:13.666604 | orchestrator | 2026-01-03 03:05:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:16.707568 | orchestrator | 2026-01-03 03:05:16 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:05:16.709035 | orchestrator | 2026-01-03 03:05:16 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:05:16.709088 | orchestrator | 2026-01-03 03:05:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:19.754691 | orchestrator | 2026-01-03 03:05:19 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:05:19.756739 | orchestrator | 2026-01-03 03:05:19 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:05:19.756801 | orchestrator | 2026-01-03 03:05:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:22.801649 | orchestrator | 2026-01-03 03:05:22 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:05:22.805291 | orchestrator | 2026-01-03 03:05:22 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
03:05:22.805391 | orchestrator | 2026-01-03 03:05:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:25.853555 | orchestrator | 2026-01-03 03:05:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:05:25.855735 | orchestrator | 2026-01-03 03:05:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:05:25.855811 | orchestrator | 2026-01-03 03:05:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:28.897963 | orchestrator | 2026-01-03 03:05:28 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:05:28.899368 | orchestrator | 2026-01-03 03:05:28 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:05:28.899425 | orchestrator | 2026-01-03 03:05:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:31.947578 | orchestrator | 2026-01-03 03:05:31 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:05:31.949710 | orchestrator | 2026-01-03 03:05:31 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:05:31.949788 | orchestrator | 2026-01-03 03:05:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:34.994563 | orchestrator | 2026-01-03 03:05:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:05:34.996298 | orchestrator | 2026-01-03 03:05:34 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:05:34.996347 | orchestrator | 2026-01-03 03:05:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:38.041040 | orchestrator | 2026-01-03 03:05:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:05:38.043322 | orchestrator | 2026-01-03 03:05:38 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:05:38.043404 | orchestrator | 2026-01-03 03:05:38 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 03:05:41.088489 | orchestrator | 2026-01-03 03:05:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:05:41.089695 | orchestrator | 2026-01-03 03:05:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:05:41.089750 | orchestrator | 2026-01-03 03:05:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:44.136116 | orchestrator | 2026-01-03 03:05:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:05:44.137984 | orchestrator | 2026-01-03 03:05:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:05:44.138056 | orchestrator | 2026-01-03 03:05:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:47.178275 | orchestrator | 2026-01-03 03:05:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:05:47.179123 | orchestrator | 2026-01-03 03:05:47 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:05:47.179162 | orchestrator | 2026-01-03 03:05:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:50.223810 | orchestrator | 2026-01-03 03:05:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:05:50.226649 | orchestrator | 2026-01-03 03:05:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:05:50.227303 | orchestrator | 2026-01-03 03:05:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:53.270263 | orchestrator | 2026-01-03 03:05:53 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:05:53.272668 | orchestrator | 2026-01-03 03:05:53 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:05:53.272789 | orchestrator | 2026-01-03 03:05:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:56.318703 | orchestrator | 2026-01-03 
03:05:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:05:56.320025 | orchestrator | 2026-01-03 03:05:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:05:56.320089 | orchestrator | 2026-01-03 03:05:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:59.367378 | orchestrator | 2026-01-03 03:05:59 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:05:59.368665 | orchestrator | 2026-01-03 03:05:59 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:05:59.368894 | orchestrator | 2026-01-03 03:05:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:02.415002 | orchestrator | 2026-01-03 03:06:02 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:06:02.416686 | orchestrator | 2026-01-03 03:06:02 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:06:02.416746 | orchestrator | 2026-01-03 03:06:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:05.459077 | orchestrator | 2026-01-03 03:06:05 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:06:05.459763 | orchestrator | 2026-01-03 03:06:05 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:06:05.459814 | orchestrator | 2026-01-03 03:06:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:08.508423 | orchestrator | 2026-01-03 03:06:08 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:06:08.510217 | orchestrator | 2026-01-03 03:06:08 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:06:08.510329 | orchestrator | 2026-01-03 03:06:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:11.556493 | orchestrator | 2026-01-03 03:06:11 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 03:06:11.558472 | orchestrator | 2026-01-03 03:06:11 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:06:11.558526 | orchestrator | 2026-01-03 03:06:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:14.604144 | orchestrator | 2026-01-03 03:06:14 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:06:14.605492 | orchestrator | 2026-01-03 03:06:14 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:06:14.605552 | orchestrator | 2026-01-03 03:06:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:17.648666 | orchestrator | 2026-01-03 03:06:17 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:06:17.649828 | orchestrator | 2026-01-03 03:06:17 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:06:17.649893 | orchestrator | 2026-01-03 03:06:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:20.694866 | orchestrator | 2026-01-03 03:06:20 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:06:20.696073 | orchestrator | 2026-01-03 03:06:20 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:06:20.696151 | orchestrator | 2026-01-03 03:06:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:23.743348 | orchestrator | 2026-01-03 03:06:23 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:06:23.744682 | orchestrator | 2026-01-03 03:06:23 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:06:23.744723 | orchestrator | 2026-01-03 03:06:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:26.793886 | orchestrator | 2026-01-03 03:06:26 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:06:26.795473 | orchestrator | 2026-01-03 03:06:26 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:06:26.795552 | orchestrator | 2026-01-03 03:06:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:29.838785 | orchestrator | 2026-01-03 03:06:29 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:06:29.840798 | orchestrator | 2026-01-03 03:06:29 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:06:29.840872 | orchestrator | 2026-01-03 03:06:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:32.884444 | orchestrator | 2026-01-03 03:06:32 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:06:32.885480 | orchestrator | 2026-01-03 03:06:32 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:06:32.885552 | orchestrator | 2026-01-03 03:06:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:35.930778 | orchestrator | 2026-01-03 03:06:35 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:06:35.933545 | orchestrator | 2026-01-03 03:06:35 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:06:35.933606 | orchestrator | 2026-01-03 03:06:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:38.986287 | orchestrator | 2026-01-03 03:06:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:06:38.986403 | orchestrator | 2026-01-03 03:06:38 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:06:38.986421 | orchestrator | 2026-01-03 03:06:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:42.031281 | orchestrator | 2026-01-03 03:06:42 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:06:42.033231 | orchestrator | 2026-01-03 03:06:42 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
03:06:42.033287 | orchestrator | 2026-01-03 03:06:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:45.079747 | orchestrator | 2026-01-03 03:06:45 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:06:45.079883 | orchestrator | 2026-01-03 03:06:45 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:06:45.079912 | orchestrator | 2026-01-03 03:06:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:48.135366 | orchestrator | 2026-01-03 03:06:48 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:06:48.137038 | orchestrator | 2026-01-03 03:06:48 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:06:48.137116 | orchestrator | 2026-01-03 03:06:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:51.189127 | orchestrator | 2026-01-03 03:06:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:06:51.191380 | orchestrator | 2026-01-03 03:06:51 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:06:51.191472 | orchestrator | 2026-01-03 03:06:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:54.232767 | orchestrator | 2026-01-03 03:06:54 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:06:54.234233 | orchestrator | 2026-01-03 03:06:54 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:06:54.234294 | orchestrator | 2026-01-03 03:06:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:57.285432 | orchestrator | 2026-01-03 03:06:57 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:06:57.287449 | orchestrator | 2026-01-03 03:06:57 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:06:57.287598 | orchestrator | 2026-01-03 03:06:57 | INFO  | Wait 1 second(s) 
until the next check
2026-01-03 03:07:00.335069 | orchestrator | 2026-01-03 03:07:00 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 03:07:00.337415 | orchestrator | 2026-01-03 03:07:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 03:07:00.337476 | orchestrator | 2026-01-03 03:07:00 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds; both tasks remained in state STARTED through 2026-01-03 03:12:14 ...]
2026-01-03 03:12:14.248956 | orchestrator | 2026-01-03 03:12:14 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 03:12:14.251488 | orchestrator | 2026-01-03 03:12:14 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 03:12:14.251555 | orchestrator | 2026-01-03 03:12:14 | INFO  | Wait 1 second(s)
until the next check 2026-01-03 03:12:17.294966 | orchestrator | 2026-01-03 03:12:17 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:12:17.295169 | orchestrator | 2026-01-03 03:12:17 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:12:17.295189 | orchestrator | 2026-01-03 03:12:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:12:20.344352 | orchestrator | 2026-01-03 03:12:20 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:12:20.345926 | orchestrator | 2026-01-03 03:12:20 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:12:20.345950 | orchestrator | 2026-01-03 03:12:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:12:23.391117 | orchestrator | 2026-01-03 03:12:23 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:12:23.392705 | orchestrator | 2026-01-03 03:12:23 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:12:23.392775 | orchestrator | 2026-01-03 03:12:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:12:26.440947 | orchestrator | 2026-01-03 03:12:26 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:12:26.443148 | orchestrator | 2026-01-03 03:12:26 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:12:26.443365 | orchestrator | 2026-01-03 03:12:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:12:29.485488 | orchestrator | 2026-01-03 03:12:29 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:12:29.487323 | orchestrator | 2026-01-03 03:12:29 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:12:29.487383 | orchestrator | 2026-01-03 03:12:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:12:32.528328 | orchestrator | 2026-01-03 
03:12:32 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:12:32.529899 | orchestrator | 2026-01-03 03:12:32 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:12:32.529950 | orchestrator | 2026-01-03 03:12:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:12:35.578664 | orchestrator | 2026-01-03 03:12:35 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:12:35.581715 | orchestrator | 2026-01-03 03:12:35 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:12:35.582060 | orchestrator | 2026-01-03 03:12:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:12:38.632661 | orchestrator | 2026-01-03 03:12:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:12:38.634395 | orchestrator | 2026-01-03 03:12:38 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:12:38.634491 | orchestrator | 2026-01-03 03:12:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:12:41.685121 | orchestrator | 2026-01-03 03:12:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:12:41.686183 | orchestrator | 2026-01-03 03:12:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:12:41.686222 | orchestrator | 2026-01-03 03:12:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:12:44.734517 | orchestrator | 2026-01-03 03:12:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:12:44.736540 | orchestrator | 2026-01-03 03:12:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:12:44.736759 | orchestrator | 2026-01-03 03:12:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:12:47.781250 | orchestrator | 2026-01-03 03:12:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 03:12:47.783695 | orchestrator | 2026-01-03 03:12:47 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:12:47.783809 | orchestrator | 2026-01-03 03:12:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:12:50.833085 | orchestrator | 2026-01-03 03:12:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:12:50.834769 | orchestrator | 2026-01-03 03:12:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:12:50.834871 | orchestrator | 2026-01-03 03:12:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:12:53.885605 | orchestrator | 2026-01-03 03:12:53 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:12:53.886717 | orchestrator | 2026-01-03 03:12:53 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:12:53.886796 | orchestrator | 2026-01-03 03:12:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:12:56.931237 | orchestrator | 2026-01-03 03:12:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:12:56.932638 | orchestrator | 2026-01-03 03:12:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:12:56.932746 | orchestrator | 2026-01-03 03:12:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:12:59.984265 | orchestrator | 2026-01-03 03:12:59 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:12:59.986787 | orchestrator | 2026-01-03 03:12:59 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:12:59.986827 | orchestrator | 2026-01-03 03:12:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:13:03.034487 | orchestrator | 2026-01-03 03:13:03 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:13:03.038603 | orchestrator | 2026-01-03 03:13:03 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:13:03.038932 | orchestrator | 2026-01-03 03:13:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:13:06.084563 | orchestrator | 2026-01-03 03:13:06 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:13:06.087627 | orchestrator | 2026-01-03 03:13:06 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:13:06.087848 | orchestrator | 2026-01-03 03:13:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:13:09.137134 | orchestrator | 2026-01-03 03:13:09 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:13:09.140189 | orchestrator | 2026-01-03 03:13:09 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:13:09.140273 | orchestrator | 2026-01-03 03:13:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:13:12.194912 | orchestrator | 2026-01-03 03:13:12 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:13:12.197050 | orchestrator | 2026-01-03 03:13:12 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:13:12.197185 | orchestrator | 2026-01-03 03:13:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:13:15.246454 | orchestrator | 2026-01-03 03:13:15 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:13:15.249980 | orchestrator | 2026-01-03 03:13:15 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:13:15.250202 | orchestrator | 2026-01-03 03:13:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:13:18.301406 | orchestrator | 2026-01-03 03:13:18 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:13:18.303184 | orchestrator | 2026-01-03 03:13:18 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
03:13:18.303292 | orchestrator | 2026-01-03 03:13:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:13:21.351392 | orchestrator | 2026-01-03 03:13:21 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:13:21.354888 | orchestrator | 2026-01-03 03:13:21 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:13:21.355004 | orchestrator | 2026-01-03 03:13:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:13:24.400598 | orchestrator | 2026-01-03 03:13:24 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:13:24.401686 | orchestrator | 2026-01-03 03:13:24 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:13:24.401734 | orchestrator | 2026-01-03 03:13:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:13:27.456535 | orchestrator | 2026-01-03 03:13:27 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:13:27.459159 | orchestrator | 2026-01-03 03:13:27 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:13:27.459232 | orchestrator | 2026-01-03 03:13:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:13:30.504239 | orchestrator | 2026-01-03 03:13:30 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:13:30.505741 | orchestrator | 2026-01-03 03:13:30 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:13:30.505854 | orchestrator | 2026-01-03 03:13:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:13:33.553813 | orchestrator | 2026-01-03 03:13:33 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:13:33.554840 | orchestrator | 2026-01-03 03:13:33 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:13:33.554903 | orchestrator | 2026-01-03 03:13:33 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 03:13:36.598178 | orchestrator | 2026-01-03 03:13:36 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:13:36.600197 | orchestrator | 2026-01-03 03:13:36 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:13:36.600279 | orchestrator | 2026-01-03 03:13:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:13:39.645615 | orchestrator | 2026-01-03 03:13:39 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:13:39.648140 | orchestrator | 2026-01-03 03:13:39 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:13:39.648216 | orchestrator | 2026-01-03 03:13:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:13:42.701648 | orchestrator | 2026-01-03 03:13:42 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:13:42.701739 | orchestrator | 2026-01-03 03:13:42 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:13:42.701748 | orchestrator | 2026-01-03 03:13:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:13:45.744192 | orchestrator | 2026-01-03 03:13:45 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:13:45.746713 | orchestrator | 2026-01-03 03:13:45 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:13:45.746797 | orchestrator | 2026-01-03 03:13:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:13:48.793076 | orchestrator | 2026-01-03 03:13:48 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:13:48.794702 | orchestrator | 2026-01-03 03:13:48 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:13:48.795227 | orchestrator | 2026-01-03 03:13:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:13:51.836990 | orchestrator | 2026-01-03 
03:13:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:13:51.838387 | orchestrator | 2026-01-03 03:13:51 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:13:51.838454 | orchestrator | 2026-01-03 03:13:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:13:54.880959 | orchestrator | 2026-01-03 03:13:54 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:13:54.882598 | orchestrator | 2026-01-03 03:13:54 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:13:54.882792 | orchestrator | 2026-01-03 03:13:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:13:57.933352 | orchestrator | 2026-01-03 03:13:57 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:13:57.935040 | orchestrator | 2026-01-03 03:13:57 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:13:57.935088 | orchestrator | 2026-01-03 03:13:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:14:00.984932 | orchestrator | 2026-01-03 03:14:00 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:14:00.986327 | orchestrator | 2026-01-03 03:14:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:14:00.986370 | orchestrator | 2026-01-03 03:14:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:14:04.038216 | orchestrator | 2026-01-03 03:14:04 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:14:04.038758 | orchestrator | 2026-01-03 03:14:04 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:14:04.038777 | orchestrator | 2026-01-03 03:14:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:14:07.082377 | orchestrator | 2026-01-03 03:14:07 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 03:14:07.085117 | orchestrator | 2026-01-03 03:14:07 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:14:07.085198 | orchestrator | 2026-01-03 03:14:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:14:10.133328 | orchestrator | 2026-01-03 03:14:10 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:14:10.134680 | orchestrator | 2026-01-03 03:14:10 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:14:10.135171 | orchestrator | 2026-01-03 03:14:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:14:13.186883 | orchestrator | 2026-01-03 03:14:13 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:14:13.188184 | orchestrator | 2026-01-03 03:14:13 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:14:13.188217 | orchestrator | 2026-01-03 03:14:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:14:16.232870 | orchestrator | 2026-01-03 03:14:16 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:14:16.232989 | orchestrator | 2026-01-03 03:14:16 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:14:16.232998 | orchestrator | 2026-01-03 03:14:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:14:19.277919 | orchestrator | 2026-01-03 03:14:19 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:14:19.279009 | orchestrator | 2026-01-03 03:14:19 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:14:19.279192 | orchestrator | 2026-01-03 03:14:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:14:22.326385 | orchestrator | 2026-01-03 03:14:22 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:14:22.327873 | orchestrator | 2026-01-03 03:14:22 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:14:22.328035 | orchestrator | 2026-01-03 03:14:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:14:25.371102 | orchestrator | 2026-01-03 03:14:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:14:25.373295 | orchestrator | 2026-01-03 03:14:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:14:25.373355 | orchestrator | 2026-01-03 03:14:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:14:28.421232 | orchestrator | 2026-01-03 03:14:28 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:14:28.422156 | orchestrator | 2026-01-03 03:14:28 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:14:28.422226 | orchestrator | 2026-01-03 03:14:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:14:31.468450 | orchestrator | 2026-01-03 03:14:31 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:14:31.470557 | orchestrator | 2026-01-03 03:14:31 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:14:31.470672 | orchestrator | 2026-01-03 03:14:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:14:34.513804 | orchestrator | 2026-01-03 03:14:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:14:34.516961 | orchestrator | 2026-01-03 03:14:34 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:14:34.517067 | orchestrator | 2026-01-03 03:14:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:14:37.557732 | orchestrator | 2026-01-03 03:14:37 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:14:37.559250 | orchestrator | 2026-01-03 03:14:37 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
03:14:37.559455 | orchestrator | 2026-01-03 03:14:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:14:40.606640 | orchestrator | 2026-01-03 03:14:40 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:14:40.607872 | orchestrator | 2026-01-03 03:14:40 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:14:40.607991 | orchestrator | 2026-01-03 03:14:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:14:43.663836 | orchestrator | 2026-01-03 03:14:43 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:14:43.664213 | orchestrator | 2026-01-03 03:14:43 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:14:43.664237 | orchestrator | 2026-01-03 03:14:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:14:46.703300 | orchestrator | 2026-01-03 03:14:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:14:46.706733 | orchestrator | 2026-01-03 03:14:46 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:14:46.706802 | orchestrator | 2026-01-03 03:14:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:14:49.753630 | orchestrator | 2026-01-03 03:14:49 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:14:49.757477 | orchestrator | 2026-01-03 03:14:49 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:14:49.757569 | orchestrator | 2026-01-03 03:14:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:14:52.812421 | orchestrator | 2026-01-03 03:14:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:14:52.813898 | orchestrator | 2026-01-03 03:14:52 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:14:52.813976 | orchestrator | 2026-01-03 03:14:52 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 03:14:55.860763 | orchestrator | 2026-01-03 03:14:55 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:14:55.862613 | orchestrator | 2026-01-03 03:14:55 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:14:55.862669 | orchestrator | 2026-01-03 03:14:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:14:58.916254 | orchestrator | 2026-01-03 03:14:58 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:14:58.917759 | orchestrator | 2026-01-03 03:14:58 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:14:58.917800 | orchestrator | 2026-01-03 03:14:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:01.966528 | orchestrator | 2026-01-03 03:15:01 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:15:01.968438 | orchestrator | 2026-01-03 03:15:01 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:15:01.968539 | orchestrator | 2026-01-03 03:15:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:05.020380 | orchestrator | 2026-01-03 03:15:05 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:15:05.021887 | orchestrator | 2026-01-03 03:15:05 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:15:05.021949 | orchestrator | 2026-01-03 03:15:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:08.069674 | orchestrator | 2026-01-03 03:15:08 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:15:08.072127 | orchestrator | 2026-01-03 03:15:08 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:15:08.072201 | orchestrator | 2026-01-03 03:15:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:11.126987 | orchestrator | 2026-01-03 
03:15:11 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:15:11.128041 | orchestrator | 2026-01-03 03:15:11 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:15:11.128141 | orchestrator | 2026-01-03 03:15:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:14.184414 | orchestrator | 2026-01-03 03:15:14 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:15:14.185812 | orchestrator | 2026-01-03 03:15:14 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:15:14.185844 | orchestrator | 2026-01-03 03:15:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:17.227685 | orchestrator | 2026-01-03 03:15:17 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:15:17.230173 | orchestrator | 2026-01-03 03:15:17 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:15:17.230223 | orchestrator | 2026-01-03 03:15:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:20.280760 | orchestrator | 2026-01-03 03:15:20 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:15:20.283811 | orchestrator | 2026-01-03 03:15:20 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:15:20.283897 | orchestrator | 2026-01-03 03:15:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:23.330096 | orchestrator | 2026-01-03 03:15:23 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:15:23.331628 | orchestrator | 2026-01-03 03:15:23 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:15:23.331668 | orchestrator | 2026-01-03 03:15:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:26.379639 | orchestrator | 2026-01-03 03:15:26 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 03:15:26.381063 | orchestrator | 2026-01-03 03:15:26 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:15:26.381348 | orchestrator | 2026-01-03 03:15:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:29.432771 | orchestrator | 2026-01-03 03:15:29 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:15:29.436375 | orchestrator | 2026-01-03 03:15:29 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:15:29.436457 | orchestrator | 2026-01-03 03:15:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:32.481476 | orchestrator | 2026-01-03 03:15:32 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:15:32.482163 | orchestrator | 2026-01-03 03:15:32 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:15:32.482471 | orchestrator | 2026-01-03 03:15:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:35.523724 | orchestrator | 2026-01-03 03:15:35 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:15:35.524963 | orchestrator | 2026-01-03 03:15:35 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:15:35.525002 | orchestrator | 2026-01-03 03:15:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:38.570935 | orchestrator | 2026-01-03 03:15:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:15:38.572590 | orchestrator | 2026-01-03 03:15:38 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:15:38.572777 | orchestrator | 2026-01-03 03:15:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:41.621975 | orchestrator | 2026-01-03 03:15:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:15:41.623332 | orchestrator | 2026-01-03 03:15:41 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:15:41.623435 | orchestrator | 2026-01-03 03:15:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:44.672600 | orchestrator | 2026-01-03 03:15:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:15:44.674725 | orchestrator | 2026-01-03 03:15:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:15:44.674817 | orchestrator | 2026-01-03 03:15:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:47.718677 | orchestrator | 2026-01-03 03:15:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:15:47.718773 | orchestrator | 2026-01-03 03:15:47 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:15:47.718790 | orchestrator | 2026-01-03 03:15:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:50.765586 | orchestrator | 2026-01-03 03:15:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:15:50.768916 | orchestrator | 2026-01-03 03:15:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:15:50.768985 | orchestrator | 2026-01-03 03:15:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:53.816965 | orchestrator | 2026-01-03 03:15:53 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:15:53.817127 | orchestrator | 2026-01-03 03:15:53 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:15:53.817142 | orchestrator | 2026-01-03 03:15:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:56.860183 | orchestrator | 2026-01-03 03:15:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:15:56.861925 | orchestrator | 2026-01-03 03:15:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
03:15:56.861983 | orchestrator | 2026-01-03 03:15:56 | INFO  | Wait 1 second(s) until the next check
2026-01-03 03:15:59.912353 | orchestrator | 2026-01-03 03:15:59 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 03:15:59.914398 | orchestrator | 2026-01-03 03:15:59 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 03:15:59.914531 | orchestrator | 2026-01-03 03:15:59 | INFO  | Wait 1 second(s) until the next check
[... identical three-line status check repeated every ~3 seconds from 03:16:02 through 03:21:26; tasks b16f335a-ddd7-42d6-ae3e-bcacbe9793fa and 80634f4e-7557-474f-b43a-6fc42f9dfcdb remained in state STARTED throughout ...]
2026-01-03 03:21:29.134855 | orchestrator | 2026-01-03 03:21:29 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 03:21:29.136325 | orchestrator | 2026-01-03 03:21:29 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 03:21:29.136421 | orchestrator | 2026-01-03 03:21:29 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 03:21:32.184543 | orchestrator | 2026-01-03 03:21:32 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:21:32.186080 | orchestrator | 2026-01-03 03:21:32 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:21:32.186116 | orchestrator | 2026-01-03 03:21:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:21:35.235509 | orchestrator | 2026-01-03 03:21:35 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:21:35.236393 | orchestrator | 2026-01-03 03:21:35 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:21:35.236452 | orchestrator | 2026-01-03 03:21:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:21:38.285999 | orchestrator | 2026-01-03 03:21:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:21:38.287022 | orchestrator | 2026-01-03 03:21:38 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:21:38.287296 | orchestrator | 2026-01-03 03:21:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:21:41.330075 | orchestrator | 2026-01-03 03:21:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:21:41.332792 | orchestrator | 2026-01-03 03:21:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:21:41.332903 | orchestrator | 2026-01-03 03:21:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:21:44.380467 | orchestrator | 2026-01-03 03:21:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:21:44.383199 | orchestrator | 2026-01-03 03:21:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:21:44.383273 | orchestrator | 2026-01-03 03:21:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:21:47.430688 | orchestrator | 2026-01-03 
03:21:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:21:47.432713 | orchestrator | 2026-01-03 03:21:47 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:21:47.432763 | orchestrator | 2026-01-03 03:21:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:21:50.482497 | orchestrator | 2026-01-03 03:21:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:21:50.485971 | orchestrator | 2026-01-03 03:21:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:21:50.486129 | orchestrator | 2026-01-03 03:21:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:21:53.540769 | orchestrator | 2026-01-03 03:21:53 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:21:53.542635 | orchestrator | 2026-01-03 03:21:53 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:21:53.542687 | orchestrator | 2026-01-03 03:21:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:21:56.593432 | orchestrator | 2026-01-03 03:21:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:21:56.596858 | orchestrator | 2026-01-03 03:21:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:21:56.597152 | orchestrator | 2026-01-03 03:21:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:21:59.647565 | orchestrator | 2026-01-03 03:21:59 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:21:59.648312 | orchestrator | 2026-01-03 03:21:59 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:21:59.648391 | orchestrator | 2026-01-03 03:21:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:22:02.692401 | orchestrator | 2026-01-03 03:22:02 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 03:22:02.694351 | orchestrator | 2026-01-03 03:22:02 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:22:02.694395 | orchestrator | 2026-01-03 03:22:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:22:05.740788 | orchestrator | 2026-01-03 03:22:05 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:22:05.742093 | orchestrator | 2026-01-03 03:22:05 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:22:05.742283 | orchestrator | 2026-01-03 03:22:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:22:08.792690 | orchestrator | 2026-01-03 03:22:08 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:22:08.795699 | orchestrator | 2026-01-03 03:22:08 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:22:08.795764 | orchestrator | 2026-01-03 03:22:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:22:11.844934 | orchestrator | 2026-01-03 03:22:11 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:22:11.847926 | orchestrator | 2026-01-03 03:22:11 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:22:11.848005 | orchestrator | 2026-01-03 03:22:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:22:14.905089 | orchestrator | 2026-01-03 03:22:14 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:22:14.907470 | orchestrator | 2026-01-03 03:22:14 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:22:14.907874 | orchestrator | 2026-01-03 03:22:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:22:17.956710 | orchestrator | 2026-01-03 03:22:17 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:22:17.957926 | orchestrator | 2026-01-03 03:22:17 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:22:17.957964 | orchestrator | 2026-01-03 03:22:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:22:21.007534 | orchestrator | 2026-01-03 03:22:21 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:22:21.010401 | orchestrator | 2026-01-03 03:22:21 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:22:21.010506 | orchestrator | 2026-01-03 03:22:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:22:24.065889 | orchestrator | 2026-01-03 03:22:24 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:22:24.066521 | orchestrator | 2026-01-03 03:22:24 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:22:24.066574 | orchestrator | 2026-01-03 03:22:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:22:27.112729 | orchestrator | 2026-01-03 03:22:27 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:22:27.114094 | orchestrator | 2026-01-03 03:22:27 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:22:27.114148 | orchestrator | 2026-01-03 03:22:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:22:30.161546 | orchestrator | 2026-01-03 03:22:30 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:22:30.162962 | orchestrator | 2026-01-03 03:22:30 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:22:30.163011 | orchestrator | 2026-01-03 03:22:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:22:33.207738 | orchestrator | 2026-01-03 03:22:33 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:22:33.209122 | orchestrator | 2026-01-03 03:22:33 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
03:22:33.209207 | orchestrator | 2026-01-03 03:22:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:22:36.261594 | orchestrator | 2026-01-03 03:22:36 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:22:36.263975 | orchestrator | 2026-01-03 03:22:36 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:22:36.264055 | orchestrator | 2026-01-03 03:22:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:22:39.315614 | orchestrator | 2026-01-03 03:22:39 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:22:39.318826 | orchestrator | 2026-01-03 03:22:39 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:22:39.318981 | orchestrator | 2026-01-03 03:22:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:22:42.380303 | orchestrator | 2026-01-03 03:22:42 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:22:42.382199 | orchestrator | 2026-01-03 03:22:42 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:22:42.383113 | orchestrator | 2026-01-03 03:22:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:22:45.430141 | orchestrator | 2026-01-03 03:22:45 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:22:45.431731 | orchestrator | 2026-01-03 03:22:45 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:22:45.432110 | orchestrator | 2026-01-03 03:22:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:22:48.477843 | orchestrator | 2026-01-03 03:22:48 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:22:48.478367 | orchestrator | 2026-01-03 03:22:48 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:22:48.478391 | orchestrator | 2026-01-03 03:22:48 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 03:22:51.532353 | orchestrator | 2026-01-03 03:22:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:22:51.533987 | orchestrator | 2026-01-03 03:22:51 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:22:51.534154 | orchestrator | 2026-01-03 03:22:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:22:54.582497 | orchestrator | 2026-01-03 03:22:54 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:22:54.583345 | orchestrator | 2026-01-03 03:22:54 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:22:54.583380 | orchestrator | 2026-01-03 03:22:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:22:57.633180 | orchestrator | 2026-01-03 03:22:57 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:22:57.634196 | orchestrator | 2026-01-03 03:22:57 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:22:57.634282 | orchestrator | 2026-01-03 03:22:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:00.682414 | orchestrator | 2026-01-03 03:23:00 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:23:00.684241 | orchestrator | 2026-01-03 03:23:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:23:00.684304 | orchestrator | 2026-01-03 03:23:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:03.735065 | orchestrator | 2026-01-03 03:23:03 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:23:03.735882 | orchestrator | 2026-01-03 03:23:03 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:23:03.735913 | orchestrator | 2026-01-03 03:23:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:06.786989 | orchestrator | 2026-01-03 
03:23:06 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:23:06.789375 | orchestrator | 2026-01-03 03:23:06 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:23:06.789461 | orchestrator | 2026-01-03 03:23:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:09.841709 | orchestrator | 2026-01-03 03:23:09 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:23:09.844745 | orchestrator | 2026-01-03 03:23:09 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:23:09.844907 | orchestrator | 2026-01-03 03:23:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:12.892711 | orchestrator | 2026-01-03 03:23:12 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:23:12.894258 | orchestrator | 2026-01-03 03:23:12 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:23:12.894659 | orchestrator | 2026-01-03 03:23:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:15.947413 | orchestrator | 2026-01-03 03:23:15 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:23:15.949674 | orchestrator | 2026-01-03 03:23:15 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:23:15.949730 | orchestrator | 2026-01-03 03:23:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:18.997588 | orchestrator | 2026-01-03 03:23:18 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:23:18.998859 | orchestrator | 2026-01-03 03:23:18 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:23:18.998899 | orchestrator | 2026-01-03 03:23:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:22.045358 | orchestrator | 2026-01-03 03:23:22 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 03:23:22.046772 | orchestrator | 2026-01-03 03:23:22 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:23:22.046858 | orchestrator | 2026-01-03 03:23:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:25.098950 | orchestrator | 2026-01-03 03:23:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:23:25.100340 | orchestrator | 2026-01-03 03:23:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:23:25.100401 | orchestrator | 2026-01-03 03:23:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:28.144076 | orchestrator | 2026-01-03 03:23:28 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:23:28.145549 | orchestrator | 2026-01-03 03:23:28 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:23:28.146274 | orchestrator | 2026-01-03 03:23:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:31.197811 | orchestrator | 2026-01-03 03:23:31 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:23:31.199620 | orchestrator | 2026-01-03 03:23:31 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:23:31.199678 | orchestrator | 2026-01-03 03:23:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:34.245713 | orchestrator | 2026-01-03 03:23:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:23:34.248384 | orchestrator | 2026-01-03 03:23:34 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:23:34.248457 | orchestrator | 2026-01-03 03:23:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:37.289268 | orchestrator | 2026-01-03 03:23:37 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:23:37.291682 | orchestrator | 2026-01-03 03:23:37 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:23:37.291751 | orchestrator | 2026-01-03 03:23:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:40.340588 | orchestrator | 2026-01-03 03:23:40 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:23:40.341293 | orchestrator | 2026-01-03 03:23:40 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:23:40.341341 | orchestrator | 2026-01-03 03:23:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:43.390289 | orchestrator | 2026-01-03 03:23:43 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:23:43.391576 | orchestrator | 2026-01-03 03:23:43 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:23:43.391639 | orchestrator | 2026-01-03 03:23:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:46.437830 | orchestrator | 2026-01-03 03:23:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:23:46.439276 | orchestrator | 2026-01-03 03:23:46 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:23:46.439380 | orchestrator | 2026-01-03 03:23:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:49.484188 | orchestrator | 2026-01-03 03:23:49 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:23:49.486275 | orchestrator | 2026-01-03 03:23:49 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:23:49.486314 | orchestrator | 2026-01-03 03:23:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:52.530669 | orchestrator | 2026-01-03 03:23:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:23:52.669982 | orchestrator | 2026-01-03 03:23:52 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
03:23:52.670156 | orchestrator | 2026-01-03 03:23:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:55.583967 | orchestrator | 2026-01-03 03:23:55 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:23:55.585765 | orchestrator | 2026-01-03 03:23:55 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:23:55.585824 | orchestrator | 2026-01-03 03:23:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:58.630526 | orchestrator | 2026-01-03 03:23:58 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:23:58.631695 | orchestrator | 2026-01-03 03:23:58 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:23:58.631973 | orchestrator | 2026-01-03 03:23:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:01.677767 | orchestrator | 2026-01-03 03:24:01 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:24:01.678438 | orchestrator | 2026-01-03 03:24:01 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:24:01.678481 | orchestrator | 2026-01-03 03:24:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:04.724717 | orchestrator | 2026-01-03 03:24:04 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:24:04.725622 | orchestrator | 2026-01-03 03:24:04 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:24:04.725648 | orchestrator | 2026-01-03 03:24:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:07.776898 | orchestrator | 2026-01-03 03:24:07 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:24:07.778976 | orchestrator | 2026-01-03 03:24:07 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:24:07.779067 | orchestrator | 2026-01-03 03:24:07 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 03:24:10.828830 | orchestrator | 2026-01-03 03:24:10 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:24:10.829941 | orchestrator | 2026-01-03 03:24:10 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:24:10.830069 | orchestrator | 2026-01-03 03:24:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:13.878691 | orchestrator | 2026-01-03 03:24:13 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:24:13.879760 | orchestrator | 2026-01-03 03:24:13 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:24:13.880072 | orchestrator | 2026-01-03 03:24:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:16.922305 | orchestrator | 2026-01-03 03:24:16 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:24:16.924486 | orchestrator | 2026-01-03 03:24:16 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:24:16.924541 | orchestrator | 2026-01-03 03:24:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:19.964750 | orchestrator | 2026-01-03 03:24:19 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:24:19.966638 | orchestrator | 2026-01-03 03:24:19 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:24:19.966726 | orchestrator | 2026-01-03 03:24:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:23.029874 | orchestrator | 2026-01-03 03:24:23 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:24:23.029960 | orchestrator | 2026-01-03 03:24:23 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:24:23.030095 | orchestrator | 2026-01-03 03:24:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:26.079023 | orchestrator | 2026-01-03 
03:24:26 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:24:26.080289 | orchestrator | 2026-01-03 03:24:26 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:24:26.080338 | orchestrator | 2026-01-03 03:24:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:29.129246 | orchestrator | 2026-01-03 03:24:29 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:24:29.129568 | orchestrator | 2026-01-03 03:24:29 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:24:29.129596 | orchestrator | 2026-01-03 03:24:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:32.178763 | orchestrator | 2026-01-03 03:24:32 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:24:32.179914 | orchestrator | 2026-01-03 03:24:32 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:24:32.179958 | orchestrator | 2026-01-03 03:24:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:35.224891 | orchestrator | 2026-01-03 03:24:35 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:24:35.226989 | orchestrator | 2026-01-03 03:24:35 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:24:35.227155 | orchestrator | 2026-01-03 03:24:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:38.274244 | orchestrator | 2026-01-03 03:24:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:24:38.275909 | orchestrator | 2026-01-03 03:24:38 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:24:38.276382 | orchestrator | 2026-01-03 03:24:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:41.325624 | orchestrator | 2026-01-03 03:24:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 03:24:41.327803 | orchestrator | 2026-01-03 03:24:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:24:41.327974 | orchestrator | 2026-01-03 03:24:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:44.373790 | orchestrator | 2026-01-03 03:24:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:24:44.375992 | orchestrator | 2026-01-03 03:24:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:24:44.376282 | orchestrator | 2026-01-03 03:24:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:47.421619 | orchestrator | 2026-01-03 03:24:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:24:47.422514 | orchestrator | 2026-01-03 03:24:47 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:24:47.422635 | orchestrator | 2026-01-03 03:24:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:50.471589 | orchestrator | 2026-01-03 03:24:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:24:50.473407 | orchestrator | 2026-01-03 03:24:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:24:50.473464 | orchestrator | 2026-01-03 03:24:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:53.521061 | orchestrator | 2026-01-03 03:24:53 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:24:53.522801 | orchestrator | 2026-01-03 03:24:53 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:24:53.522905 | orchestrator | 2026-01-03 03:24:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:56.573325 | orchestrator | 2026-01-03 03:24:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:24:56.575742 | orchestrator | 2026-01-03 03:24:56 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:24:56.575825 | orchestrator | 2026-01-03 03:24:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:59.621967 | orchestrator | 2026-01-03 03:24:59 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:24:59.623325 | orchestrator | 2026-01-03 03:24:59 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:24:59.623463 | orchestrator | 2026-01-03 03:24:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:25:02.671732 | orchestrator | 2026-01-03 03:25:02 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:25:02.672755 | orchestrator | 2026-01-03 03:25:02 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:25:02.672814 | orchestrator | 2026-01-03 03:25:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:25:05.721486 | orchestrator | 2026-01-03 03:25:05 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:25:05.723325 | orchestrator | 2026-01-03 03:25:05 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:25:05.723388 | orchestrator | 2026-01-03 03:25:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:25:08.769033 | orchestrator | 2026-01-03 03:25:08 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:25:08.770151 | orchestrator | 2026-01-03 03:25:08 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:25:08.770229 | orchestrator | 2026-01-03 03:25:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:25:11.812901 | orchestrator | 2026-01-03 03:25:11 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:25:11.815907 | orchestrator | 2026-01-03 03:25:11 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
03:25:11.815983 | orchestrator | 2026-01-03 03:25:11 | INFO  | Wait 1 second(s) until the next check
2026-01-03 03:25:14.858200 | orchestrator | 2026-01-03 03:25:14 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 03:25:14.858877 | orchestrator | 2026-01-03 03:25:14 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 03:25:14.858950 | orchestrator | 2026-01-03 03:25:14 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 03:25:17 through 03:30:13: tasks b16f335a-ddd7-42d6-ae3e-bcacbe9793fa and 80634f4e-7557-474f-b43a-6fc42f9dfcdb remain in state STARTED, followed by "Wait 1 second(s) until the next check" ...]
2026-01-03 03:30:13.531109 | orchestrator | 2026-01-03 03:30:13 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:30:13.533580 | orchestrator | 2026-01-03 03:30:13 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:30:13.533630 | orchestrator | 2026-01-03 03:30:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:30:16.583516 | orchestrator | 2026-01-03 03:30:16 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:30:16.585399 | orchestrator | 2026-01-03 03:30:16 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:30:16.585475 | orchestrator | 2026-01-03 03:30:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:30:19.626843 | orchestrator | 2026-01-03 03:30:19 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:30:19.627103 | orchestrator | 2026-01-03 03:30:19 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:30:19.627124 | orchestrator | 2026-01-03 03:30:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:30:22.672864 | orchestrator | 2026-01-03 03:30:22 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:30:22.674604 | orchestrator | 2026-01-03 03:30:22 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:30:22.674737 | orchestrator | 2026-01-03 03:30:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:30:25.723535 | orchestrator | 2026-01-03 03:30:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:30:25.724981 | orchestrator | 2026-01-03 03:30:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:30:25.725060 | orchestrator | 2026-01-03 03:30:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:30:28.776038 | orchestrator | 2026-01-03 03:30:28 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:30:28.777075 | orchestrator | 2026-01-03 03:30:28 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
03:30:28.777120 | orchestrator | 2026-01-03 03:30:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:30:31.817778 | orchestrator | 2026-01-03 03:30:31 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:30:31.818824 | orchestrator | 2026-01-03 03:30:31 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:30:31.819061 | orchestrator | 2026-01-03 03:30:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:30:34.869615 | orchestrator | 2026-01-03 03:30:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:30:34.870214 | orchestrator | 2026-01-03 03:30:34 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:30:34.870289 | orchestrator | 2026-01-03 03:30:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:30:37.916012 | orchestrator | 2026-01-03 03:30:37 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:30:37.918357 | orchestrator | 2026-01-03 03:30:37 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:30:37.918508 | orchestrator | 2026-01-03 03:30:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:30:40.961617 | orchestrator | 2026-01-03 03:30:40 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:30:40.962662 | orchestrator | 2026-01-03 03:30:40 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:30:40.962696 | orchestrator | 2026-01-03 03:30:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:30:44.008911 | orchestrator | 2026-01-03 03:30:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:30:44.009979 | orchestrator | 2026-01-03 03:30:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:30:44.010172 | orchestrator | 2026-01-03 03:30:44 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 03:30:47.054787 | orchestrator | 2026-01-03 03:30:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:30:47.055670 | orchestrator | 2026-01-03 03:30:47 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:30:47.055756 | orchestrator | 2026-01-03 03:30:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:30:50.100009 | orchestrator | 2026-01-03 03:30:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:30:50.101127 | orchestrator | 2026-01-03 03:30:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:30:50.101164 | orchestrator | 2026-01-03 03:30:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:30:53.144888 | orchestrator | 2026-01-03 03:30:53 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:30:53.145136 | orchestrator | 2026-01-03 03:30:53 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:30:53.145162 | orchestrator | 2026-01-03 03:30:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:30:56.193778 | orchestrator | 2026-01-03 03:30:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:30:56.194483 | orchestrator | 2026-01-03 03:30:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:30:56.194513 | orchestrator | 2026-01-03 03:30:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:30:59.235793 | orchestrator | 2026-01-03 03:30:59 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:30:59.237790 | orchestrator | 2026-01-03 03:30:59 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:30:59.237843 | orchestrator | 2026-01-03 03:30:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:31:02.279362 | orchestrator | 2026-01-03 
03:31:02 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:31:02.281376 | orchestrator | 2026-01-03 03:31:02 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:31:02.281499 | orchestrator | 2026-01-03 03:31:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:31:05.327918 | orchestrator | 2026-01-03 03:31:05 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:31:05.329147 | orchestrator | 2026-01-03 03:31:05 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:31:05.329197 | orchestrator | 2026-01-03 03:31:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:31:08.373893 | orchestrator | 2026-01-03 03:31:08 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:31:08.375800 | orchestrator | 2026-01-03 03:31:08 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:31:08.375858 | orchestrator | 2026-01-03 03:31:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:31:11.419044 | orchestrator | 2026-01-03 03:31:11 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:31:11.419782 | orchestrator | 2026-01-03 03:31:11 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:31:11.419834 | orchestrator | 2026-01-03 03:31:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:31:14.464906 | orchestrator | 2026-01-03 03:31:14 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:31:14.466315 | orchestrator | 2026-01-03 03:31:14 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:31:14.466365 | orchestrator | 2026-01-03 03:31:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:31:17.504761 | orchestrator | 2026-01-03 03:31:17 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 03:31:17.506259 | orchestrator | 2026-01-03 03:31:17 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:31:17.506317 | orchestrator | 2026-01-03 03:31:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:31:20.552903 | orchestrator | 2026-01-03 03:31:20 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:31:20.554339 | orchestrator | 2026-01-03 03:31:20 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:31:20.554494 | orchestrator | 2026-01-03 03:31:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:31:23.600675 | orchestrator | 2026-01-03 03:31:23 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:31:23.602393 | orchestrator | 2026-01-03 03:31:23 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:31:23.602484 | orchestrator | 2026-01-03 03:31:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:31:26.646388 | orchestrator | 2026-01-03 03:31:26 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:31:26.649062 | orchestrator | 2026-01-03 03:31:26 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:31:26.649151 | orchestrator | 2026-01-03 03:31:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:31:29.696910 | orchestrator | 2026-01-03 03:31:29 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:31:29.697874 | orchestrator | 2026-01-03 03:31:29 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:31:29.698157 | orchestrator | 2026-01-03 03:31:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:31:32.741396 | orchestrator | 2026-01-03 03:31:32 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:31:32.742653 | orchestrator | 2026-01-03 03:31:32 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:31:32.742718 | orchestrator | 2026-01-03 03:31:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:31:35.786604 | orchestrator | 2026-01-03 03:31:35 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:31:35.788231 | orchestrator | 2026-01-03 03:31:35 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:31:35.788275 | orchestrator | 2026-01-03 03:31:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:31:38.832515 | orchestrator | 2026-01-03 03:31:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:31:38.834331 | orchestrator | 2026-01-03 03:31:38 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:31:38.834463 | orchestrator | 2026-01-03 03:31:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:31:41.878667 | orchestrator | 2026-01-03 03:31:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:31:41.880791 | orchestrator | 2026-01-03 03:31:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:31:41.880860 | orchestrator | 2026-01-03 03:31:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:31:44.928498 | orchestrator | 2026-01-03 03:31:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:31:44.930195 | orchestrator | 2026-01-03 03:31:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:31:44.930277 | orchestrator | 2026-01-03 03:31:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:31:47.976396 | orchestrator | 2026-01-03 03:31:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:31:47.977981 | orchestrator | 2026-01-03 03:31:47 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
03:31:47.978139 | orchestrator | 2026-01-03 03:31:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:31:51.028080 | orchestrator | 2026-01-03 03:31:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:31:51.030325 | orchestrator | 2026-01-03 03:31:51 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:31:51.030375 | orchestrator | 2026-01-03 03:31:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:31:54.076740 | orchestrator | 2026-01-03 03:31:54 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:31:54.079422 | orchestrator | 2026-01-03 03:31:54 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:31:54.079526 | orchestrator | 2026-01-03 03:31:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:31:57.123828 | orchestrator | 2026-01-03 03:31:57 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:31:57.125480 | orchestrator | 2026-01-03 03:31:57 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:31:57.125499 | orchestrator | 2026-01-03 03:31:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:32:00.173203 | orchestrator | 2026-01-03 03:32:00 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:32:00.174579 | orchestrator | 2026-01-03 03:32:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:32:00.174848 | orchestrator | 2026-01-03 03:32:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:32:03.216426 | orchestrator | 2026-01-03 03:32:03 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:32:03.217670 | orchestrator | 2026-01-03 03:32:03 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:32:03.217711 | orchestrator | 2026-01-03 03:32:03 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 03:32:06.264807 | orchestrator | 2026-01-03 03:32:06 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:32:06.266688 | orchestrator | 2026-01-03 03:32:06 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:32:06.267048 | orchestrator | 2026-01-03 03:32:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:32:09.310826 | orchestrator | 2026-01-03 03:32:09 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:32:09.311895 | orchestrator | 2026-01-03 03:32:09 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:32:09.312090 | orchestrator | 2026-01-03 03:32:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:32:12.359072 | orchestrator | 2026-01-03 03:32:12 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:32:12.359180 | orchestrator | 2026-01-03 03:32:12 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:32:12.359581 | orchestrator | 2026-01-03 03:32:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:32:15.402284 | orchestrator | 2026-01-03 03:32:15 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:32:15.403842 | orchestrator | 2026-01-03 03:32:15 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:32:15.403921 | orchestrator | 2026-01-03 03:32:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:32:18.451179 | orchestrator | 2026-01-03 03:32:18 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:32:18.452569 | orchestrator | 2026-01-03 03:32:18 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:32:18.452623 | orchestrator | 2026-01-03 03:32:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:32:21.502928 | orchestrator | 2026-01-03 
03:32:21 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:32:21.504441 | orchestrator | 2026-01-03 03:32:21 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:32:21.504568 | orchestrator | 2026-01-03 03:32:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:32:24.550819 | orchestrator | 2026-01-03 03:32:24 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:32:24.552540 | orchestrator | 2026-01-03 03:32:24 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:32:24.552606 | orchestrator | 2026-01-03 03:32:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:32:27.602099 | orchestrator | 2026-01-03 03:32:27 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:32:27.603531 | orchestrator | 2026-01-03 03:32:27 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:32:27.603572 | orchestrator | 2026-01-03 03:32:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:32:30.654608 | orchestrator | 2026-01-03 03:32:30 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:32:30.656302 | orchestrator | 2026-01-03 03:32:30 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:32:30.656671 | orchestrator | 2026-01-03 03:32:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:32:33.703851 | orchestrator | 2026-01-03 03:32:33 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:32:33.706680 | orchestrator | 2026-01-03 03:32:33 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:32:33.706750 | orchestrator | 2026-01-03 03:32:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:32:36.752910 | orchestrator | 2026-01-03 03:32:36 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 03:32:36.753026 | orchestrator | 2026-01-03 03:32:36 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:32:36.753096 | orchestrator | 2026-01-03 03:32:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:32:39.806465 | orchestrator | 2026-01-03 03:32:39 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:32:39.808675 | orchestrator | 2026-01-03 03:32:39 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:32:39.808812 | orchestrator | 2026-01-03 03:32:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:32:42.856067 | orchestrator | 2026-01-03 03:32:42 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:32:42.857363 | orchestrator | 2026-01-03 03:32:42 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:32:42.857459 | orchestrator | 2026-01-03 03:32:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:32:45.905119 | orchestrator | 2026-01-03 03:32:45 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:32:45.906729 | orchestrator | 2026-01-03 03:32:45 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:32:45.906783 | orchestrator | 2026-01-03 03:32:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:32:48.949053 | orchestrator | 2026-01-03 03:32:48 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:32:48.949210 | orchestrator | 2026-01-03 03:32:48 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:32:48.949227 | orchestrator | 2026-01-03 03:32:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:32:51.996615 | orchestrator | 2026-01-03 03:32:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:32:51.998098 | orchestrator | 2026-01-03 03:32:51 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:32:51.998172 | orchestrator | 2026-01-03 03:32:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:32:55.055427 | orchestrator | 2026-01-03 03:32:55 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:32:55.056842 | orchestrator | 2026-01-03 03:32:55 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:32:55.056987 | orchestrator | 2026-01-03 03:32:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:32:58.104232 | orchestrator | 2026-01-03 03:32:58 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:32:58.105611 | orchestrator | 2026-01-03 03:32:58 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:32:58.105656 | orchestrator | 2026-01-03 03:32:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:01.147899 | orchestrator | 2026-01-03 03:33:01 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:33:01.150932 | orchestrator | 2026-01-03 03:33:01 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:33:01.150999 | orchestrator | 2026-01-03 03:33:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:04.200496 | orchestrator | 2026-01-03 03:33:04 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:33:04.202079 | orchestrator | 2026-01-03 03:33:04 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:33:04.202121 | orchestrator | 2026-01-03 03:33:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:07.248474 | orchestrator | 2026-01-03 03:33:07 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:33:07.250903 | orchestrator | 2026-01-03 03:33:07 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
03:33:07.251025 | orchestrator | 2026-01-03 03:33:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:10.296105 | orchestrator | 2026-01-03 03:33:10 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:33:10.298442 | orchestrator | 2026-01-03 03:33:10 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:33:10.298500 | orchestrator | 2026-01-03 03:33:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:13.348078 | orchestrator | 2026-01-03 03:33:13 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:33:13.350679 | orchestrator | 2026-01-03 03:33:13 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:33:13.350875 | orchestrator | 2026-01-03 03:33:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:16.392853 | orchestrator | 2026-01-03 03:33:16 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:33:16.394434 | orchestrator | 2026-01-03 03:33:16 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:33:16.394611 | orchestrator | 2026-01-03 03:33:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:19.437317 | orchestrator | 2026-01-03 03:33:19 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:33:19.438700 | orchestrator | 2026-01-03 03:33:19 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:33:19.438748 | orchestrator | 2026-01-03 03:33:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:22.480090 | orchestrator | 2026-01-03 03:33:22 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:33:22.481605 | orchestrator | 2026-01-03 03:33:22 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:33:22.481642 | orchestrator | 2026-01-03 03:33:22 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 03:33:25.525532 | orchestrator | 2026-01-03 03:33:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:33:25.526883 | orchestrator | 2026-01-03 03:33:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:33:25.526923 | orchestrator | 2026-01-03 03:33:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:28.576705 | orchestrator | 2026-01-03 03:33:28 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:33:28.578722 | orchestrator | 2026-01-03 03:33:28 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:33:28.578885 | orchestrator | 2026-01-03 03:33:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:31.624499 | orchestrator | 2026-01-03 03:33:31 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:33:31.627129 | orchestrator | 2026-01-03 03:33:31 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:33:31.627211 | orchestrator | 2026-01-03 03:33:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:34.678443 | orchestrator | 2026-01-03 03:33:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:33:34.682232 | orchestrator | 2026-01-03 03:33:34 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:33:34.682322 | orchestrator | 2026-01-03 03:33:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:37.729483 | orchestrator | 2026-01-03 03:33:37 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:33:37.733532 | orchestrator | 2026-01-03 03:33:37 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:33:37.733639 | orchestrator | 2026-01-03 03:33:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:40.782916 | orchestrator | 2026-01-03 
03:33:40 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:33:40.784101 | orchestrator | 2026-01-03 03:33:40 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:33:40.784148 | orchestrator | 2026-01-03 03:33:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:43.833081 | orchestrator | 2026-01-03 03:33:43 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:33:43.833993 | orchestrator | 2026-01-03 03:33:43 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:33:43.834109 | orchestrator | 2026-01-03 03:33:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:46.881078 | orchestrator | 2026-01-03 03:33:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:33:46.882328 | orchestrator | 2026-01-03 03:33:46 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:33:46.882470 | orchestrator | 2026-01-03 03:33:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:49.928142 | orchestrator | 2026-01-03 03:33:49 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:33:49.929857 | orchestrator | 2026-01-03 03:33:49 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:33:49.930202 | orchestrator | 2026-01-03 03:33:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:52.976937 | orchestrator | 2026-01-03 03:33:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:33:52.977478 | orchestrator | 2026-01-03 03:33:52 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:33:52.977516 | orchestrator | 2026-01-03 03:33:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:56.029952 | orchestrator | 2026-01-03 03:33:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED
2026-01-03 03:33:56.031996 | orchestrator | 2026-01-03 03:33:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 03:33:56.032079 | orchestrator | 2026-01-03 03:33:56 | INFO  | Wait 1 second(s) until the next check
[... identical status checks for tasks b16f335a-ddd7-42d6-ae3e-bcacbe9793fa and 80634f4e-7557-474f-b43a-6fc42f9dfcdb, both remaining in state STARTED, repeated every ~3 seconds from 03:33:59 through 03:39:25 ...]
2026-01-03 03:39:28.221559 | orchestrator | 2026-01-03 03:39:28 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 03:39:28.223049 | orchestrator | 2026-01-03 03:39:28 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:39:28.223082 | orchestrator | 2026-01-03 03:39:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:39:31.269913 | orchestrator | 2026-01-03 03:39:31 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:39:31.271061 | orchestrator | 2026-01-03 03:39:31 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:39:31.271189 | orchestrator | 2026-01-03 03:39:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:39:34.314740 | orchestrator | 2026-01-03 03:39:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:39:34.316997 | orchestrator | 2026-01-03 03:39:34 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:39:34.317122 | orchestrator | 2026-01-03 03:39:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:39:37.365027 | orchestrator | 2026-01-03 03:39:37 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:39:37.366289 | orchestrator | 2026-01-03 03:39:37 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:39:37.366352 | orchestrator | 2026-01-03 03:39:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:39:40.415757 | orchestrator | 2026-01-03 03:39:40 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:39:40.420075 | orchestrator | 2026-01-03 03:39:40 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:39:40.420185 | orchestrator | 2026-01-03 03:39:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:39:43.466607 | orchestrator | 2026-01-03 03:39:43 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:39:43.469113 | orchestrator | 2026-01-03 03:39:43 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
03:39:43.469205 | orchestrator | 2026-01-03 03:39:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:39:46.511002 | orchestrator | 2026-01-03 03:39:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:39:46.512693 | orchestrator | 2026-01-03 03:39:46 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:39:46.512752 | orchestrator | 2026-01-03 03:39:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:39:49.560380 | orchestrator | 2026-01-03 03:39:49 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:39:49.563600 | orchestrator | 2026-01-03 03:39:49 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:39:49.563684 | orchestrator | 2026-01-03 03:39:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:39:52.611462 | orchestrator | 2026-01-03 03:39:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:39:52.613584 | orchestrator | 2026-01-03 03:39:52 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:39:52.613681 | orchestrator | 2026-01-03 03:39:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:39:55.655456 | orchestrator | 2026-01-03 03:39:55 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:39:55.657567 | orchestrator | 2026-01-03 03:39:55 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:39:55.657757 | orchestrator | 2026-01-03 03:39:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:39:58.704855 | orchestrator | 2026-01-03 03:39:58 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:39:58.705999 | orchestrator | 2026-01-03 03:39:58 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:39:58.706069 | orchestrator | 2026-01-03 03:39:58 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 03:40:01.749615 | orchestrator | 2026-01-03 03:40:01 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:40:01.750968 | orchestrator | 2026-01-03 03:40:01 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:40:01.751025 | orchestrator | 2026-01-03 03:40:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:40:04.802441 | orchestrator | 2026-01-03 03:40:04 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:40:04.804303 | orchestrator | 2026-01-03 03:40:04 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:40:04.804387 | orchestrator | 2026-01-03 03:40:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:40:07.848249 | orchestrator | 2026-01-03 03:40:07 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:40:07.850106 | orchestrator | 2026-01-03 03:40:07 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:40:07.850130 | orchestrator | 2026-01-03 03:40:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:40:10.896801 | orchestrator | 2026-01-03 03:40:10 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:40:10.898346 | orchestrator | 2026-01-03 03:40:10 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:40:10.898404 | orchestrator | 2026-01-03 03:40:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:40:13.941715 | orchestrator | 2026-01-03 03:40:13 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:40:13.943852 | orchestrator | 2026-01-03 03:40:13 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:40:13.943932 | orchestrator | 2026-01-03 03:40:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:40:16.996592 | orchestrator | 2026-01-03 
03:40:16 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:40:16.998303 | orchestrator | 2026-01-03 03:40:16 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:40:16.998406 | orchestrator | 2026-01-03 03:40:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:40:20.051188 | orchestrator | 2026-01-03 03:40:20 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:40:20.053113 | orchestrator | 2026-01-03 03:40:20 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:40:20.053295 | orchestrator | 2026-01-03 03:40:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:40:23.099874 | orchestrator | 2026-01-03 03:40:23 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:40:23.100377 | orchestrator | 2026-01-03 03:40:23 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:40:23.100423 | orchestrator | 2026-01-03 03:40:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:40:26.144487 | orchestrator | 2026-01-03 03:40:26 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:40:26.146298 | orchestrator | 2026-01-03 03:40:26 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:40:26.146389 | orchestrator | 2026-01-03 03:40:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:40:29.197324 | orchestrator | 2026-01-03 03:40:29 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:40:29.199422 | orchestrator | 2026-01-03 03:40:29 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:40:29.199467 | orchestrator | 2026-01-03 03:40:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:40:32.242365 | orchestrator | 2026-01-03 03:40:32 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 03:40:32.244017 | orchestrator | 2026-01-03 03:40:32 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:40:32.244070 | orchestrator | 2026-01-03 03:40:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:40:35.287152 | orchestrator | 2026-01-03 03:40:35 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:40:35.288360 | orchestrator | 2026-01-03 03:40:35 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:40:35.288439 | orchestrator | 2026-01-03 03:40:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:40:38.333181 | orchestrator | 2026-01-03 03:40:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:40:38.335248 | orchestrator | 2026-01-03 03:40:38 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:40:38.335286 | orchestrator | 2026-01-03 03:40:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:40:41.387522 | orchestrator | 2026-01-03 03:40:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:40:41.388801 | orchestrator | 2026-01-03 03:40:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:40:41.388854 | orchestrator | 2026-01-03 03:40:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:40:44.435989 | orchestrator | 2026-01-03 03:40:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:40:44.438114 | orchestrator | 2026-01-03 03:40:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:40:44.438169 | orchestrator | 2026-01-03 03:40:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:40:47.488772 | orchestrator | 2026-01-03 03:40:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:40:47.491026 | orchestrator | 2026-01-03 03:40:47 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:40:47.491111 | orchestrator | 2026-01-03 03:40:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:40:50.537710 | orchestrator | 2026-01-03 03:40:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:40:50.538905 | orchestrator | 2026-01-03 03:40:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:40:50.538960 | orchestrator | 2026-01-03 03:40:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:40:53.587633 | orchestrator | 2026-01-03 03:40:53 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:40:53.589596 | orchestrator | 2026-01-03 03:40:53 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:40:53.589656 | orchestrator | 2026-01-03 03:40:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:40:56.633622 | orchestrator | 2026-01-03 03:40:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:40:56.635526 | orchestrator | 2026-01-03 03:40:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:40:56.635566 | orchestrator | 2026-01-03 03:40:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:40:59.683064 | orchestrator | 2026-01-03 03:40:59 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:40:59.685353 | orchestrator | 2026-01-03 03:40:59 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:40:59.685423 | orchestrator | 2026-01-03 03:40:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:41:02.730740 | orchestrator | 2026-01-03 03:41:02 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:41:02.732908 | orchestrator | 2026-01-03 03:41:02 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
03:41:02.732959 | orchestrator | 2026-01-03 03:41:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:41:05.775694 | orchestrator | 2026-01-03 03:41:05 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:41:05.777816 | orchestrator | 2026-01-03 03:41:05 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:41:05.778110 | orchestrator | 2026-01-03 03:41:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:41:08.822330 | orchestrator | 2026-01-03 03:41:08 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:41:08.824379 | orchestrator | 2026-01-03 03:41:08 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:41:08.824548 | orchestrator | 2026-01-03 03:41:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:41:11.873694 | orchestrator | 2026-01-03 03:41:11 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:41:11.874796 | orchestrator | 2026-01-03 03:41:11 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:41:11.874862 | orchestrator | 2026-01-03 03:41:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:41:14.922168 | orchestrator | 2026-01-03 03:41:14 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:41:14.923711 | orchestrator | 2026-01-03 03:41:14 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:41:14.923756 | orchestrator | 2026-01-03 03:41:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:41:17.977537 | orchestrator | 2026-01-03 03:41:17 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:41:17.979759 | orchestrator | 2026-01-03 03:41:17 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:41:17.979892 | orchestrator | 2026-01-03 03:41:17 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 03:41:21.028480 | orchestrator | 2026-01-03 03:41:21 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:41:21.030185 | orchestrator | 2026-01-03 03:41:21 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:41:21.030382 | orchestrator | 2026-01-03 03:41:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:41:24.074571 | orchestrator | 2026-01-03 03:41:24 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:41:24.076150 | orchestrator | 2026-01-03 03:41:24 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:41:24.076265 | orchestrator | 2026-01-03 03:41:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:41:27.123994 | orchestrator | 2026-01-03 03:41:27 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:41:27.124939 | orchestrator | 2026-01-03 03:41:27 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:41:27.124973 | orchestrator | 2026-01-03 03:41:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:41:30.167795 | orchestrator | 2026-01-03 03:41:30 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:41:30.169262 | orchestrator | 2026-01-03 03:41:30 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:41:30.169336 | orchestrator | 2026-01-03 03:41:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:41:33.216882 | orchestrator | 2026-01-03 03:41:33 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:41:33.218264 | orchestrator | 2026-01-03 03:41:33 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:41:33.218497 | orchestrator | 2026-01-03 03:41:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:41:36.264725 | orchestrator | 2026-01-03 
03:41:36 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:41:36.266687 | orchestrator | 2026-01-03 03:41:36 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:41:36.266730 | orchestrator | 2026-01-03 03:41:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:41:39.313954 | orchestrator | 2026-01-03 03:41:39 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:41:39.315762 | orchestrator | 2026-01-03 03:41:39 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:41:39.315844 | orchestrator | 2026-01-03 03:41:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:41:42.360413 | orchestrator | 2026-01-03 03:41:42 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:41:42.361538 | orchestrator | 2026-01-03 03:41:42 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:41:42.361590 | orchestrator | 2026-01-03 03:41:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:41:45.407772 | orchestrator | 2026-01-03 03:41:45 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:41:45.409411 | orchestrator | 2026-01-03 03:41:45 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:41:45.409475 | orchestrator | 2026-01-03 03:41:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:41:48.452502 | orchestrator | 2026-01-03 03:41:48 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:41:48.454379 | orchestrator | 2026-01-03 03:41:48 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:41:48.454920 | orchestrator | 2026-01-03 03:41:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:41:51.502206 | orchestrator | 2026-01-03 03:41:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 03:41:51.504027 | orchestrator | 2026-01-03 03:41:51 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:41:51.504053 | orchestrator | 2026-01-03 03:41:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:41:54.548567 | orchestrator | 2026-01-03 03:41:54 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:41:54.550064 | orchestrator | 2026-01-03 03:41:54 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:41:54.550220 | orchestrator | 2026-01-03 03:41:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:41:57.598706 | orchestrator | 2026-01-03 03:41:57 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:41:57.600401 | orchestrator | 2026-01-03 03:41:57 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:41:57.600445 | orchestrator | 2026-01-03 03:41:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:00.645784 | orchestrator | 2026-01-03 03:42:00 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:42:00.647040 | orchestrator | 2026-01-03 03:42:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:42:00.647242 | orchestrator | 2026-01-03 03:42:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:03.696456 | orchestrator | 2026-01-03 03:42:03 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:42:03.698698 | orchestrator | 2026-01-03 03:42:03 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:42:03.698792 | orchestrator | 2026-01-03 03:42:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:06.747364 | orchestrator | 2026-01-03 03:42:06 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:42:06.749246 | orchestrator | 2026-01-03 03:42:06 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:42:06.749371 | orchestrator | 2026-01-03 03:42:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:09.795728 | orchestrator | 2026-01-03 03:42:09 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:42:09.797291 | orchestrator | 2026-01-03 03:42:09 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:42:09.797393 | orchestrator | 2026-01-03 03:42:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:12.839482 | orchestrator | 2026-01-03 03:42:12 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:42:12.841431 | orchestrator | 2026-01-03 03:42:12 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:42:12.841478 | orchestrator | 2026-01-03 03:42:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:15.888903 | orchestrator | 2026-01-03 03:42:15 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:42:15.890586 | orchestrator | 2026-01-03 03:42:15 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:42:15.890668 | orchestrator | 2026-01-03 03:42:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:18.940667 | orchestrator | 2026-01-03 03:42:18 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:42:18.942393 | orchestrator | 2026-01-03 03:42:18 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:42:18.942474 | orchestrator | 2026-01-03 03:42:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:21.986786 | orchestrator | 2026-01-03 03:42:21 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:42:21.988315 | orchestrator | 2026-01-03 03:42:21 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
03:42:21.988525 | orchestrator | 2026-01-03 03:42:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:25.041486 | orchestrator | 2026-01-03 03:42:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:42:25.043884 | orchestrator | 2026-01-03 03:42:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:42:25.043934 | orchestrator | 2026-01-03 03:42:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:28.096518 | orchestrator | 2026-01-03 03:42:28 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:42:28.097875 | orchestrator | 2026-01-03 03:42:28 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:42:28.097915 | orchestrator | 2026-01-03 03:42:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:31.139579 | orchestrator | 2026-01-03 03:42:31 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:42:31.140474 | orchestrator | 2026-01-03 03:42:31 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:42:31.140529 | orchestrator | 2026-01-03 03:42:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:34.184357 | orchestrator | 2026-01-03 03:42:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:42:34.186234 | orchestrator | 2026-01-03 03:42:34 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:42:34.186291 | orchestrator | 2026-01-03 03:42:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:37.232676 | orchestrator | 2026-01-03 03:42:37 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:42:37.234331 | orchestrator | 2026-01-03 03:42:37 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:42:37.234420 | orchestrator | 2026-01-03 03:42:37 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 03:42:40.271783 | orchestrator | 2026-01-03 03:42:40 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:42:40.273025 | orchestrator | 2026-01-03 03:42:40 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:42:40.273058 | orchestrator | 2026-01-03 03:42:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:43.318459 | orchestrator | 2026-01-03 03:42:43 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:42:43.319351 | orchestrator | 2026-01-03 03:42:43 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:42:43.319421 | orchestrator | 2026-01-03 03:42:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:46.358598 | orchestrator | 2026-01-03 03:42:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:42:46.359867 | orchestrator | 2026-01-03 03:42:46 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:42:46.359983 | orchestrator | 2026-01-03 03:42:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:49.403516 | orchestrator | 2026-01-03 03:42:49 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:42:49.405015 | orchestrator | 2026-01-03 03:42:49 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:42:49.405321 | orchestrator | 2026-01-03 03:42:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:52.454467 | orchestrator | 2026-01-03 03:42:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:42:52.455958 | orchestrator | 2026-01-03 03:42:52 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:42:52.456076 | orchestrator | 2026-01-03 03:42:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:55.498938 | orchestrator | 2026-01-03 
03:42:55 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:42:55.502284 | orchestrator | 2026-01-03 03:42:55 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:42:55.502550 | orchestrator | 2026-01-03 03:42:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:58.549867 | orchestrator | 2026-01-03 03:42:58 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:42:58.551081 | orchestrator | 2026-01-03 03:42:58 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:42:58.551123 | orchestrator | 2026-01-03 03:42:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:43:01.602634 | orchestrator | 2026-01-03 03:43:01 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:43:01.606440 | orchestrator | 2026-01-03 03:43:01 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:43:01.606507 | orchestrator | 2026-01-03 03:43:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:43:04.651205 | orchestrator | 2026-01-03 03:43:04 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:43:04.653720 | orchestrator | 2026-01-03 03:43:04 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:43:04.653785 | orchestrator | 2026-01-03 03:43:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:43:07.700236 | orchestrator | 2026-01-03 03:43:07 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:43:07.701635 | orchestrator | 2026-01-03 03:43:07 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:43:07.701689 | orchestrator | 2026-01-03 03:43:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:43:10.744233 | orchestrator | 2026-01-03 03:43:10 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 03:43:10.746765 | orchestrator | 2026-01-03 03:43:10 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:43:10.746831 | orchestrator | 2026-01-03 03:43:10 | INFO  | Wait 1 second(s) until the next check
[... repetitive polling output elided: tasks b16f335a-ddd7-42d6-ae3e-bcacbe9793fa and 80634f4e-7557-474f-b43a-6fc42f9dfcdb remained in state STARTED, re-checked roughly every 3 seconds from 03:43:13 through 03:48:24 ...]
2026-01-03 03:48:27.734916 | orchestrator | 2026-01-03 03:48:27 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state
STARTED 2026-01-03 03:48:27.739312 | orchestrator | 2026-01-03 03:48:27 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:48:27.739371 | orchestrator | 2026-01-03 03:48:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:48:30.779650 | orchestrator | 2026-01-03 03:48:30 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:48:30.781341 | orchestrator | 2026-01-03 03:48:30 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:48:30.781419 | orchestrator | 2026-01-03 03:48:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:48:33.825168 | orchestrator | 2026-01-03 03:48:33 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:48:33.827474 | orchestrator | 2026-01-03 03:48:33 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:48:33.827597 | orchestrator | 2026-01-03 03:48:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:48:36.869981 | orchestrator | 2026-01-03 03:48:36 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:48:36.871296 | orchestrator | 2026-01-03 03:48:36 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:48:36.871333 | orchestrator | 2026-01-03 03:48:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:48:39.920731 | orchestrator | 2026-01-03 03:48:39 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:48:39.922695 | orchestrator | 2026-01-03 03:48:39 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:48:39.922832 | orchestrator | 2026-01-03 03:48:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:48:42.967719 | orchestrator | 2026-01-03 03:48:42 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:48:42.968925 | orchestrator | 2026-01-03 03:48:42 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:48:42.969030 | orchestrator | 2026-01-03 03:48:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:48:46.012089 | orchestrator | 2026-01-03 03:48:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:48:46.014180 | orchestrator | 2026-01-03 03:48:46 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:48:46.014237 | orchestrator | 2026-01-03 03:48:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:48:49.061314 | orchestrator | 2026-01-03 03:48:49 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:48:49.063740 | orchestrator | 2026-01-03 03:48:49 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:48:49.063784 | orchestrator | 2026-01-03 03:48:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:48:52.110291 | orchestrator | 2026-01-03 03:48:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:48:52.110373 | orchestrator | 2026-01-03 03:48:52 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:48:52.110382 | orchestrator | 2026-01-03 03:48:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:48:55.145045 | orchestrator | 2026-01-03 03:48:55 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:48:55.146251 | orchestrator | 2026-01-03 03:48:55 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:48:55.146296 | orchestrator | 2026-01-03 03:48:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:48:58.196494 | orchestrator | 2026-01-03 03:48:58 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:48:58.197659 | orchestrator | 2026-01-03 03:48:58 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
03:48:58.197702 | orchestrator | 2026-01-03 03:48:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:49:01.243094 | orchestrator | 2026-01-03 03:49:01 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:49:01.245765 | orchestrator | 2026-01-03 03:49:01 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:49:01.245848 | orchestrator | 2026-01-03 03:49:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:49:04.294301 | orchestrator | 2026-01-03 03:49:04 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:49:04.295518 | orchestrator | 2026-01-03 03:49:04 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:49:04.295592 | orchestrator | 2026-01-03 03:49:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:49:07.343530 | orchestrator | 2026-01-03 03:49:07 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:49:07.347221 | orchestrator | 2026-01-03 03:49:07 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:49:07.347276 | orchestrator | 2026-01-03 03:49:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:49:10.393863 | orchestrator | 2026-01-03 03:49:10 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:49:10.395112 | orchestrator | 2026-01-03 03:49:10 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:49:10.395164 | orchestrator | 2026-01-03 03:49:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:49:13.444505 | orchestrator | 2026-01-03 03:49:13 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:49:13.447226 | orchestrator | 2026-01-03 03:49:13 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:49:13.447306 | orchestrator | 2026-01-03 03:49:13 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 03:49:16.494297 | orchestrator | 2026-01-03 03:49:16 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:49:16.496174 | orchestrator | 2026-01-03 03:49:16 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:49:16.496236 | orchestrator | 2026-01-03 03:49:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:49:19.545356 | orchestrator | 2026-01-03 03:49:19 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:49:19.546961 | orchestrator | 2026-01-03 03:49:19 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:49:19.547053 | orchestrator | 2026-01-03 03:49:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:49:22.588404 | orchestrator | 2026-01-03 03:49:22 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:49:22.588734 | orchestrator | 2026-01-03 03:49:22 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:49:22.588775 | orchestrator | 2026-01-03 03:49:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:49:25.632843 | orchestrator | 2026-01-03 03:49:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:49:25.634191 | orchestrator | 2026-01-03 03:49:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:49:25.634244 | orchestrator | 2026-01-03 03:49:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:49:28.685404 | orchestrator | 2026-01-03 03:49:28 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:49:28.687214 | orchestrator | 2026-01-03 03:49:28 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:49:28.687302 | orchestrator | 2026-01-03 03:49:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:49:31.730594 | orchestrator | 2026-01-03 
03:49:31 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:49:31.732189 | orchestrator | 2026-01-03 03:49:31 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:49:31.732233 | orchestrator | 2026-01-03 03:49:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:49:34.778496 | orchestrator | 2026-01-03 03:49:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:49:34.779404 | orchestrator | 2026-01-03 03:49:34 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:49:34.779441 | orchestrator | 2026-01-03 03:49:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:49:37.826781 | orchestrator | 2026-01-03 03:49:37 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:49:37.828834 | orchestrator | 2026-01-03 03:49:37 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:49:37.828903 | orchestrator | 2026-01-03 03:49:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:49:40.874979 | orchestrator | 2026-01-03 03:49:40 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:49:40.876813 | orchestrator | 2026-01-03 03:49:40 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:49:40.876867 | orchestrator | 2026-01-03 03:49:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:49:43.923525 | orchestrator | 2026-01-03 03:49:43 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:49:43.926079 | orchestrator | 2026-01-03 03:49:43 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:49:43.926145 | orchestrator | 2026-01-03 03:49:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:49:46.972480 | orchestrator | 2026-01-03 03:49:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 03:49:46.975156 | orchestrator | 2026-01-03 03:49:46 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:49:46.975207 | orchestrator | 2026-01-03 03:49:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:49:50.023233 | orchestrator | 2026-01-03 03:49:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:49:50.024158 | orchestrator | 2026-01-03 03:49:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:49:50.024201 | orchestrator | 2026-01-03 03:49:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:49:53.065937 | orchestrator | 2026-01-03 03:49:53 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:49:53.067939 | orchestrator | 2026-01-03 03:49:53 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:49:53.068041 | orchestrator | 2026-01-03 03:49:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:49:56.114788 | orchestrator | 2026-01-03 03:49:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:49:56.117073 | orchestrator | 2026-01-03 03:49:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:49:56.117125 | orchestrator | 2026-01-03 03:49:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:49:59.170394 | orchestrator | 2026-01-03 03:49:59 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:49:59.171421 | orchestrator | 2026-01-03 03:49:59 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:49:59.171989 | orchestrator | 2026-01-03 03:49:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:02.218392 | orchestrator | 2026-01-03 03:50:02 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:50:02.220006 | orchestrator | 2026-01-03 03:50:02 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:50:02.220104 | orchestrator | 2026-01-03 03:50:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:05.271643 | orchestrator | 2026-01-03 03:50:05 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:50:05.272145 | orchestrator | 2026-01-03 03:50:05 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:50:05.272167 | orchestrator | 2026-01-03 03:50:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:08.317268 | orchestrator | 2026-01-03 03:50:08 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:50:08.318879 | orchestrator | 2026-01-03 03:50:08 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:50:08.318932 | orchestrator | 2026-01-03 03:50:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:11.363977 | orchestrator | 2026-01-03 03:50:11 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:50:11.365194 | orchestrator | 2026-01-03 03:50:11 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:50:11.365252 | orchestrator | 2026-01-03 03:50:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:14.411457 | orchestrator | 2026-01-03 03:50:14 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:50:14.413914 | orchestrator | 2026-01-03 03:50:14 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:50:14.413973 | orchestrator | 2026-01-03 03:50:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:17.460065 | orchestrator | 2026-01-03 03:50:17 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:50:17.462194 | orchestrator | 2026-01-03 03:50:17 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
03:50:17.462272 | orchestrator | 2026-01-03 03:50:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:20.509007 | orchestrator | 2026-01-03 03:50:20 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:50:20.510729 | orchestrator | 2026-01-03 03:50:20 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:50:20.510780 | orchestrator | 2026-01-03 03:50:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:23.549933 | orchestrator | 2026-01-03 03:50:23 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:50:23.552188 | orchestrator | 2026-01-03 03:50:23 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:50:23.552235 | orchestrator | 2026-01-03 03:50:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:26.593100 | orchestrator | 2026-01-03 03:50:26 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:50:26.595203 | orchestrator | 2026-01-03 03:50:26 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:50:26.595263 | orchestrator | 2026-01-03 03:50:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:29.644182 | orchestrator | 2026-01-03 03:50:29 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:50:29.645337 | orchestrator | 2026-01-03 03:50:29 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:50:29.645432 | orchestrator | 2026-01-03 03:50:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:32.692043 | orchestrator | 2026-01-03 03:50:32 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:50:32.693965 | orchestrator | 2026-01-03 03:50:32 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:50:32.694080 | orchestrator | 2026-01-03 03:50:32 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 03:50:35.745590 | orchestrator | 2026-01-03 03:50:35 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:50:35.747084 | orchestrator | 2026-01-03 03:50:35 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:50:35.747170 | orchestrator | 2026-01-03 03:50:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:38.792059 | orchestrator | 2026-01-03 03:50:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:50:38.793333 | orchestrator | 2026-01-03 03:50:38 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:50:38.793382 | orchestrator | 2026-01-03 03:50:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:41.837070 | orchestrator | 2026-01-03 03:50:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:50:41.838746 | orchestrator | 2026-01-03 03:50:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:50:41.838809 | orchestrator | 2026-01-03 03:50:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:44.883653 | orchestrator | 2026-01-03 03:50:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:50:44.884806 | orchestrator | 2026-01-03 03:50:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:50:44.884846 | orchestrator | 2026-01-03 03:50:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:47.927414 | orchestrator | 2026-01-03 03:50:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:50:47.929173 | orchestrator | 2026-01-03 03:50:47 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:50:47.929204 | orchestrator | 2026-01-03 03:50:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:50.973544 | orchestrator | 2026-01-03 
03:50:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:50:50.974717 | orchestrator | 2026-01-03 03:50:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:50:50.974780 | orchestrator | 2026-01-03 03:50:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:54.037119 | orchestrator | 2026-01-03 03:50:54 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:50:54.038702 | orchestrator | 2026-01-03 03:50:54 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:50:54.038748 | orchestrator | 2026-01-03 03:50:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:57.082733 | orchestrator | 2026-01-03 03:50:57 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:50:57.084675 | orchestrator | 2026-01-03 03:50:57 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:50:57.084774 | orchestrator | 2026-01-03 03:50:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:00.131181 | orchestrator | 2026-01-03 03:51:00 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:51:00.133019 | orchestrator | 2026-01-03 03:51:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:51:00.133084 | orchestrator | 2026-01-03 03:51:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:03.176565 | orchestrator | 2026-01-03 03:51:03 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:51:03.178675 | orchestrator | 2026-01-03 03:51:03 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:51:03.178799 | orchestrator | 2026-01-03 03:51:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:06.230523 | orchestrator | 2026-01-03 03:51:06 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 03:51:06.231763 | orchestrator | 2026-01-03 03:51:06 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:51:06.232047 | orchestrator | 2026-01-03 03:51:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:09.275033 | orchestrator | 2026-01-03 03:51:09 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:51:09.276925 | orchestrator | 2026-01-03 03:51:09 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:51:09.277089 | orchestrator | 2026-01-03 03:51:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:12.319935 | orchestrator | 2026-01-03 03:51:12 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:51:12.322252 | orchestrator | 2026-01-03 03:51:12 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:51:12.322395 | orchestrator | 2026-01-03 03:51:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:15.368732 | orchestrator | 2026-01-03 03:51:15 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:51:15.370871 | orchestrator | 2026-01-03 03:51:15 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:51:15.371017 | orchestrator | 2026-01-03 03:51:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:18.415197 | orchestrator | 2026-01-03 03:51:18 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:51:18.416481 | orchestrator | 2026-01-03 03:51:18 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:51:18.416532 | orchestrator | 2026-01-03 03:51:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:21.464100 | orchestrator | 2026-01-03 03:51:21 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:51:21.465760 | orchestrator | 2026-01-03 03:51:21 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:51:21.465884 | orchestrator | 2026-01-03 03:51:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:24.512217 | orchestrator | 2026-01-03 03:51:24 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:51:24.513585 | orchestrator | 2026-01-03 03:51:24 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:51:24.513822 | orchestrator | 2026-01-03 03:51:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:27.557907 | orchestrator | 2026-01-03 03:51:27 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:51:27.558676 | orchestrator | 2026-01-03 03:51:27 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:51:27.558933 | orchestrator | 2026-01-03 03:51:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:30.603879 | orchestrator | 2026-01-03 03:51:30 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:51:30.606088 | orchestrator | 2026-01-03 03:51:30 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:51:30.606185 | orchestrator | 2026-01-03 03:51:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:33.654748 | orchestrator | 2026-01-03 03:51:33 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:51:33.656575 | orchestrator | 2026-01-03 03:51:33 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:51:33.656718 | orchestrator | 2026-01-03 03:51:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:36.702964 | orchestrator | 2026-01-03 03:51:36 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:51:36.705308 | orchestrator | 2026-01-03 03:51:36 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
03:51:36.705459 | orchestrator | 2026-01-03 03:51:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:39.750724 | orchestrator | 2026-01-03 03:51:39 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:51:39.752437 | orchestrator | 2026-01-03 03:51:39 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:51:39.752506 | orchestrator | 2026-01-03 03:51:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:42.801359 | orchestrator | 2026-01-03 03:51:42 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:51:42.803428 | orchestrator | 2026-01-03 03:51:42 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:51:42.803551 | orchestrator | 2026-01-03 03:51:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:45.845975 | orchestrator | 2026-01-03 03:51:45 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:51:45.847689 | orchestrator | 2026-01-03 03:51:45 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:51:45.847798 | orchestrator | 2026-01-03 03:51:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:48.897386 | orchestrator | 2026-01-03 03:51:48 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:51:48.898167 | orchestrator | 2026-01-03 03:51:48 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:51:48.898235 | orchestrator | 2026-01-03 03:51:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:51.946342 | orchestrator | 2026-01-03 03:51:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:51:51.947876 | orchestrator | 2026-01-03 03:51:51 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:51:51.948018 | orchestrator | 2026-01-03 03:51:51 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 03:51:54.982773 | orchestrator | 2026-01-03 03:51:54 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:51:54.983695 | orchestrator | 2026-01-03 03:51:54 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:51:54.983732 | orchestrator | 2026-01-03 03:51:54 | INFO  | Wait 1 second(s) until the next check [... identical poll cycle repeated roughly every 3 seconds from 03:51:58 through 03:57:08; both tasks remained in state STARTED throughout ...] 2026-01-03 03:57:08.836070 | orchestrator | 2026-01-03 03:57:08 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:57:08.837705 | orchestrator | 2026-01-03 03:57:08 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:57:08.837748 | orchestrator | 2026-01-03 03:57:08 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 03:57:11.881676 | orchestrator | 2026-01-03 03:57:11 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:57:11.884127 | orchestrator | 2026-01-03 03:57:11 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:57:11.884247 | orchestrator | 2026-01-03 03:57:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:57:14.927152 | orchestrator | 2026-01-03 03:57:14 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:57:14.928656 | orchestrator | 2026-01-03 03:57:14 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:57:14.929008 | orchestrator | 2026-01-03 03:57:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:57:17.969447 | orchestrator | 2026-01-03 03:57:17 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:57:17.970854 | orchestrator | 2026-01-03 03:57:17 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:57:17.970889 | orchestrator | 2026-01-03 03:57:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:57:21.016576 | orchestrator | 2026-01-03 03:57:21 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:57:21.018789 | orchestrator | 2026-01-03 03:57:21 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:57:21.018859 | orchestrator | 2026-01-03 03:57:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:57:24.069516 | orchestrator | 2026-01-03 03:57:24 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:57:24.071646 | orchestrator | 2026-01-03 03:57:24 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:57:24.071715 | orchestrator | 2026-01-03 03:57:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:57:27.116200 | orchestrator | 2026-01-03 
03:57:27 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:57:27.118172 | orchestrator | 2026-01-03 03:57:27 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:57:27.118216 | orchestrator | 2026-01-03 03:57:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:57:30.168687 | orchestrator | 2026-01-03 03:57:30 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:57:30.170616 | orchestrator | 2026-01-03 03:57:30 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:57:30.170665 | orchestrator | 2026-01-03 03:57:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:57:33.215807 | orchestrator | 2026-01-03 03:57:33 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:57:33.217759 | orchestrator | 2026-01-03 03:57:33 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:57:33.218141 | orchestrator | 2026-01-03 03:57:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:57:36.261485 | orchestrator | 2026-01-03 03:57:36 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:57:36.262190 | orchestrator | 2026-01-03 03:57:36 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:57:36.262218 | orchestrator | 2026-01-03 03:57:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:57:39.308788 | orchestrator | 2026-01-03 03:57:39 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:57:39.311259 | orchestrator | 2026-01-03 03:57:39 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:57:39.311390 | orchestrator | 2026-01-03 03:57:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:57:42.355701 | orchestrator | 2026-01-03 03:57:42 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 03:57:42.357979 | orchestrator | 2026-01-03 03:57:42 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:57:42.358123 | orchestrator | 2026-01-03 03:57:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:57:45.406215 | orchestrator | 2026-01-03 03:57:45 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:57:45.407413 | orchestrator | 2026-01-03 03:57:45 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:57:45.407485 | orchestrator | 2026-01-03 03:57:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:57:48.456760 | orchestrator | 2026-01-03 03:57:48 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:57:48.458058 | orchestrator | 2026-01-03 03:57:48 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:57:48.458084 | orchestrator | 2026-01-03 03:57:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:57:51.502850 | orchestrator | 2026-01-03 03:57:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:57:51.503953 | orchestrator | 2026-01-03 03:57:51 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:57:51.504027 | orchestrator | 2026-01-03 03:57:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:57:54.551014 | orchestrator | 2026-01-03 03:57:54 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:57:54.552993 | orchestrator | 2026-01-03 03:57:54 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:57:54.553071 | orchestrator | 2026-01-03 03:57:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:57:57.597816 | orchestrator | 2026-01-03 03:57:57 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:57:57.600234 | orchestrator | 2026-01-03 03:57:57 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:57:57.600285 | orchestrator | 2026-01-03 03:57:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:58:00.638773 | orchestrator | 2026-01-03 03:58:00 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:58:00.640195 | orchestrator | 2026-01-03 03:58:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:58:00.640335 | orchestrator | 2026-01-03 03:58:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:58:03.685901 | orchestrator | 2026-01-03 03:58:03 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:58:03.687162 | orchestrator | 2026-01-03 03:58:03 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:58:03.687229 | orchestrator | 2026-01-03 03:58:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:58:06.734523 | orchestrator | 2026-01-03 03:58:06 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:58:06.734691 | orchestrator | 2026-01-03 03:58:06 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:58:06.734880 | orchestrator | 2026-01-03 03:58:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:58:09.777664 | orchestrator | 2026-01-03 03:58:09 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:58:09.778874 | orchestrator | 2026-01-03 03:58:09 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:58:09.778911 | orchestrator | 2026-01-03 03:58:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:58:12.825949 | orchestrator | 2026-01-03 03:58:12 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:58:12.827973 | orchestrator | 2026-01-03 03:58:12 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
03:58:12.828032 | orchestrator | 2026-01-03 03:58:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:58:15.873847 | orchestrator | 2026-01-03 03:58:15 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:58:15.876414 | orchestrator | 2026-01-03 03:58:15 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:58:15.876451 | orchestrator | 2026-01-03 03:58:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:58:18.922442 | orchestrator | 2026-01-03 03:58:18 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:58:18.924001 | orchestrator | 2026-01-03 03:58:18 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:58:18.924026 | orchestrator | 2026-01-03 03:58:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:58:21.969244 | orchestrator | 2026-01-03 03:58:21 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:58:21.972093 | orchestrator | 2026-01-03 03:58:21 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:58:21.972145 | orchestrator | 2026-01-03 03:58:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:58:25.010763 | orchestrator | 2026-01-03 03:58:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:58:25.013085 | orchestrator | 2026-01-03 03:58:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:58:25.013292 | orchestrator | 2026-01-03 03:58:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:58:28.051851 | orchestrator | 2026-01-03 03:58:28 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:58:28.053738 | orchestrator | 2026-01-03 03:58:28 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:58:28.053796 | orchestrator | 2026-01-03 03:58:28 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 03:58:31.103731 | orchestrator | 2026-01-03 03:58:31 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:58:31.105458 | orchestrator | 2026-01-03 03:58:31 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:58:31.105509 | orchestrator | 2026-01-03 03:58:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:58:34.148472 | orchestrator | 2026-01-03 03:58:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:58:34.150434 | orchestrator | 2026-01-03 03:58:34 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:58:34.150471 | orchestrator | 2026-01-03 03:58:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:58:37.198580 | orchestrator | 2026-01-03 03:58:37 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:58:37.200497 | orchestrator | 2026-01-03 03:58:37 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:58:37.200902 | orchestrator | 2026-01-03 03:58:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:58:40.245058 | orchestrator | 2026-01-03 03:58:40 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:58:40.246445 | orchestrator | 2026-01-03 03:58:40 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:58:40.246500 | orchestrator | 2026-01-03 03:58:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:58:43.290613 | orchestrator | 2026-01-03 03:58:43 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:58:43.291862 | orchestrator | 2026-01-03 03:58:43 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:58:43.291894 | orchestrator | 2026-01-03 03:58:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:58:46.339583 | orchestrator | 2026-01-03 
03:58:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:58:46.341867 | orchestrator | 2026-01-03 03:58:46 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:58:46.342122 | orchestrator | 2026-01-03 03:58:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:58:49.388931 | orchestrator | 2026-01-03 03:58:49 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:58:49.390884 | orchestrator | 2026-01-03 03:58:49 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:58:49.390934 | orchestrator | 2026-01-03 03:58:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:58:52.437625 | orchestrator | 2026-01-03 03:58:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:58:52.439684 | orchestrator | 2026-01-03 03:58:52 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:58:52.439798 | orchestrator | 2026-01-03 03:58:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:58:55.489346 | orchestrator | 2026-01-03 03:58:55 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:58:55.491626 | orchestrator | 2026-01-03 03:58:55 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:58:55.491700 | orchestrator | 2026-01-03 03:58:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:58:58.535973 | orchestrator | 2026-01-03 03:58:58 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:58:58.537279 | orchestrator | 2026-01-03 03:58:58 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:58:58.537323 | orchestrator | 2026-01-03 03:58:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:59:01.585469 | orchestrator | 2026-01-03 03:59:01 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 03:59:01.588262 | orchestrator | 2026-01-03 03:59:01 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:59:01.588298 | orchestrator | 2026-01-03 03:59:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:59:04.632037 | orchestrator | 2026-01-03 03:59:04 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:59:04.633665 | orchestrator | 2026-01-03 03:59:04 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:59:04.633750 | orchestrator | 2026-01-03 03:59:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:59:07.677743 | orchestrator | 2026-01-03 03:59:07 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:59:07.679263 | orchestrator | 2026-01-03 03:59:07 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:59:07.679355 | orchestrator | 2026-01-03 03:59:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:59:10.720727 | orchestrator | 2026-01-03 03:59:10 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:59:10.721856 | orchestrator | 2026-01-03 03:59:10 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:59:10.721906 | orchestrator | 2026-01-03 03:59:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:59:13.770885 | orchestrator | 2026-01-03 03:59:13 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:59:13.772476 | orchestrator | 2026-01-03 03:59:13 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:59:13.772632 | orchestrator | 2026-01-03 03:59:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:59:16.821956 | orchestrator | 2026-01-03 03:59:16 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:59:16.823642 | orchestrator | 2026-01-03 03:59:16 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:59:16.823687 | orchestrator | 2026-01-03 03:59:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:59:19.870303 | orchestrator | 2026-01-03 03:59:19 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:59:19.871734 | orchestrator | 2026-01-03 03:59:19 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:59:19.871763 | orchestrator | 2026-01-03 03:59:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:59:22.916627 | orchestrator | 2026-01-03 03:59:22 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:59:22.917781 | orchestrator | 2026-01-03 03:59:22 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:59:22.917829 | orchestrator | 2026-01-03 03:59:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:59:25.967309 | orchestrator | 2026-01-03 03:59:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:59:25.969674 | orchestrator | 2026-01-03 03:59:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:59:25.970013 | orchestrator | 2026-01-03 03:59:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:59:29.023845 | orchestrator | 2026-01-03 03:59:29 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:59:29.025379 | orchestrator | 2026-01-03 03:59:29 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:59:29.025621 | orchestrator | 2026-01-03 03:59:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:59:32.072280 | orchestrator | 2026-01-03 03:59:32 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:59:32.074258 | orchestrator | 2026-01-03 03:59:32 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
03:59:32.074308 | orchestrator | 2026-01-03 03:59:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:59:35.122719 | orchestrator | 2026-01-03 03:59:35 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:59:35.124300 | orchestrator | 2026-01-03 03:59:35 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:59:35.124351 | orchestrator | 2026-01-03 03:59:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:59:38.171282 | orchestrator | 2026-01-03 03:59:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:59:38.172958 | orchestrator | 2026-01-03 03:59:38 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:59:38.172998 | orchestrator | 2026-01-03 03:59:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:59:41.223561 | orchestrator | 2026-01-03 03:59:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:59:41.225459 | orchestrator | 2026-01-03 03:59:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:59:41.225558 | orchestrator | 2026-01-03 03:59:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:59:44.271950 | orchestrator | 2026-01-03 03:59:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:59:44.272992 | orchestrator | 2026-01-03 03:59:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:59:44.273309 | orchestrator | 2026-01-03 03:59:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:59:47.317510 | orchestrator | 2026-01-03 03:59:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:59:47.319322 | orchestrator | 2026-01-03 03:59:47 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:59:47.319408 | orchestrator | 2026-01-03 03:59:47 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 03:59:50.364891 | orchestrator | 2026-01-03 03:59:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:59:50.366848 | orchestrator | 2026-01-03 03:59:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:59:50.367075 | orchestrator | 2026-01-03 03:59:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:59:53.409234 | orchestrator | 2026-01-03 03:59:53 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:59:53.409988 | orchestrator | 2026-01-03 03:59:53 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:59:53.410075 | orchestrator | 2026-01-03 03:59:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:59:56.448234 | orchestrator | 2026-01-03 03:59:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:59:56.450494 | orchestrator | 2026-01-03 03:59:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:59:56.450564 | orchestrator | 2026-01-03 03:59:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:59:59.493727 | orchestrator | 2026-01-03 03:59:59 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 03:59:59.495501 | orchestrator | 2026-01-03 03:59:59 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 03:59:59.495791 | orchestrator | 2026-01-03 03:59:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:02.537833 | orchestrator | 2026-01-03 04:00:02 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:00:02.539939 | orchestrator | 2026-01-03 04:00:02 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:00:02.540036 | orchestrator | 2026-01-03 04:00:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:05.588878 | orchestrator | 2026-01-03 
04:00:05 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:00:05.590489 | orchestrator | 2026-01-03 04:00:05 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:00:05.590595 | orchestrator | 2026-01-03 04:00:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:08.634927 | orchestrator | 2026-01-03 04:00:08 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:00:08.636236 | orchestrator | 2026-01-03 04:00:08 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:00:08.636337 | orchestrator | 2026-01-03 04:00:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:11.678524 | orchestrator | 2026-01-03 04:00:11 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:00:11.680586 | orchestrator | 2026-01-03 04:00:11 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:00:11.680669 | orchestrator | 2026-01-03 04:00:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:14.731284 | orchestrator | 2026-01-03 04:00:14 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:00:14.733471 | orchestrator | 2026-01-03 04:00:14 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:00:14.734011 | orchestrator | 2026-01-03 04:00:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:17.779884 | orchestrator | 2026-01-03 04:00:17 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:00:17.782260 | orchestrator | 2026-01-03 04:00:17 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:00:17.782327 | orchestrator | 2026-01-03 04:00:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:20.826550 | orchestrator | 2026-01-03 04:00:20 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 04:00:20.828045 | orchestrator | 2026-01-03 04:00:20 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:00:20.828164 | orchestrator | 2026-01-03 04:00:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:23.880583 | orchestrator | 2026-01-03 04:00:23 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:00:23.883306 | orchestrator | 2026-01-03 04:00:23 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:00:23.883375 | orchestrator | 2026-01-03 04:00:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:26.931550 | orchestrator | 2026-01-03 04:00:26 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:00:26.933195 | orchestrator | 2026-01-03 04:00:26 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:00:26.933348 | orchestrator | 2026-01-03 04:00:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:29.974516 | orchestrator | 2026-01-03 04:00:29 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:00:29.977030 | orchestrator | 2026-01-03 04:00:29 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:00:29.977164 | orchestrator | 2026-01-03 04:00:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:33.028430 | orchestrator | 2026-01-03 04:00:33 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:00:33.029585 | orchestrator | 2026-01-03 04:00:33 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:00:33.029649 | orchestrator | 2026-01-03 04:00:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:36.077529 | orchestrator | 2026-01-03 04:00:36 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:00:36.081104 | orchestrator | 2026-01-03 04:00:36 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:00:36.081188 | orchestrator | 2026-01-03 04:00:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:39.124461 | orchestrator | 2026-01-03 04:00:39 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:00:39.125617 | orchestrator | 2026-01-03 04:00:39 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:00:39.125634 | orchestrator | 2026-01-03 04:00:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:42.170340 | orchestrator | 2026-01-03 04:00:42 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:00:42.172339 | orchestrator | 2026-01-03 04:00:42 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:00:42.172387 | orchestrator | 2026-01-03 04:00:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:45.213538 | orchestrator | 2026-01-03 04:00:45 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:00:45.215335 | orchestrator | 2026-01-03 04:00:45 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:00:45.215422 | orchestrator | 2026-01-03 04:00:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:48.260770 | orchestrator | 2026-01-03 04:00:48 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:00:48.262929 | orchestrator | 2026-01-03 04:00:48 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:00:48.263003 | orchestrator | 2026-01-03 04:00:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:51.308461 | orchestrator | 2026-01-03 04:00:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:00:51.310335 | orchestrator | 2026-01-03 04:00:51 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
04:00:51.310595 | orchestrator | 2026-01-03 04:00:51 | INFO  | Wait 1 second(s) until the next check
2026-01-03 04:00:54.348976 | orchestrator | 2026-01-03 04:00:54 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 04:00:54.349258 | orchestrator | 2026-01-03 04:00:54 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 04:00:54.349396 | orchestrator | 2026-01-03 04:00:54 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds; both tasks remain in state STARTED ...]
2026-01-03 04:06:23.699026 | orchestrator | 2026-01-03 04:06:23 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 04:06:23.700440 | orchestrator | 2026-01-03 04:06:23 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 04:06:23.700555 | orchestrator | 2026-01-03 04:06:23 | INFO  | Wait 1 second(s)
until the next check 2026-01-03 04:06:26.748897 | orchestrator | 2026-01-03 04:06:26 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:06:26.750095 | orchestrator | 2026-01-03 04:06:26 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:06:26.750148 | orchestrator | 2026-01-03 04:06:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:06:29.800910 | orchestrator | 2026-01-03 04:06:29 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:06:29.802786 | orchestrator | 2026-01-03 04:06:29 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:06:29.802873 | orchestrator | 2026-01-03 04:06:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:06:32.848394 | orchestrator | 2026-01-03 04:06:32 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:06:32.850093 | orchestrator | 2026-01-03 04:06:32 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:06:32.850162 | orchestrator | 2026-01-03 04:06:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:06:35.898636 | orchestrator | 2026-01-03 04:06:35 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:06:35.900635 | orchestrator | 2026-01-03 04:06:35 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:06:35.900713 | orchestrator | 2026-01-03 04:06:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:06:38.949800 | orchestrator | 2026-01-03 04:06:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:06:38.951981 | orchestrator | 2026-01-03 04:06:38 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:06:38.952027 | orchestrator | 2026-01-03 04:06:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:06:41.997927 | orchestrator | 2026-01-03 
04:06:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:06:42.000669 | orchestrator | 2026-01-03 04:06:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:06:42.000824 | orchestrator | 2026-01-03 04:06:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:06:45.059164 | orchestrator | 2026-01-03 04:06:45 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:06:45.060906 | orchestrator | 2026-01-03 04:06:45 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:06:45.060933 | orchestrator | 2026-01-03 04:06:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:06:48.106106 | orchestrator | 2026-01-03 04:06:48 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:06:48.107732 | orchestrator | 2026-01-03 04:06:48 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:06:48.107797 | orchestrator | 2026-01-03 04:06:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:06:51.156483 | orchestrator | 2026-01-03 04:06:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:06:51.158896 | orchestrator | 2026-01-03 04:06:51 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:06:51.158980 | orchestrator | 2026-01-03 04:06:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:06:54.203554 | orchestrator | 2026-01-03 04:06:54 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:06:54.206097 | orchestrator | 2026-01-03 04:06:54 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:06:54.206201 | orchestrator | 2026-01-03 04:06:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:06:57.247917 | orchestrator | 2026-01-03 04:06:57 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 04:06:57.249331 | orchestrator | 2026-01-03 04:06:57 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:06:57.249369 | orchestrator | 2026-01-03 04:06:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:07:00.293284 | orchestrator | 2026-01-03 04:07:00 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:07:00.294585 | orchestrator | 2026-01-03 04:07:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:07:00.294611 | orchestrator | 2026-01-03 04:07:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:07:03.345247 | orchestrator | 2026-01-03 04:07:03 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:07:03.346818 | orchestrator | 2026-01-03 04:07:03 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:07:03.346955 | orchestrator | 2026-01-03 04:07:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:07:06.393344 | orchestrator | 2026-01-03 04:07:06 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:07:06.395162 | orchestrator | 2026-01-03 04:07:06 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:07:06.395321 | orchestrator | 2026-01-03 04:07:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:07:09.443119 | orchestrator | 2026-01-03 04:07:09 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:07:09.444183 | orchestrator | 2026-01-03 04:07:09 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:07:09.444302 | orchestrator | 2026-01-03 04:07:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:07:12.492839 | orchestrator | 2026-01-03 04:07:12 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:07:12.493883 | orchestrator | 2026-01-03 04:07:12 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:07:12.494197 | orchestrator | 2026-01-03 04:07:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:07:15.542091 | orchestrator | 2026-01-03 04:07:15 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:07:15.543023 | orchestrator | 2026-01-03 04:07:15 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:07:15.543055 | orchestrator | 2026-01-03 04:07:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:07:18.589930 | orchestrator | 2026-01-03 04:07:18 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:07:18.591768 | orchestrator | 2026-01-03 04:07:18 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:07:18.591812 | orchestrator | 2026-01-03 04:07:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:07:21.637825 | orchestrator | 2026-01-03 04:07:21 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:07:21.639118 | orchestrator | 2026-01-03 04:07:21 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:07:21.639197 | orchestrator | 2026-01-03 04:07:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:07:24.686429 | orchestrator | 2026-01-03 04:07:24 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:07:24.688611 | orchestrator | 2026-01-03 04:07:24 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:07:24.688677 | orchestrator | 2026-01-03 04:07:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:07:27.728072 | orchestrator | 2026-01-03 04:07:27 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:07:27.729692 | orchestrator | 2026-01-03 04:07:27 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
04:07:27.729873 | orchestrator | 2026-01-03 04:07:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:07:30.783788 | orchestrator | 2026-01-03 04:07:30 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:07:30.785280 | orchestrator | 2026-01-03 04:07:30 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:07:30.785383 | orchestrator | 2026-01-03 04:07:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:07:33.829707 | orchestrator | 2026-01-03 04:07:33 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:07:33.831558 | orchestrator | 2026-01-03 04:07:33 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:07:33.831604 | orchestrator | 2026-01-03 04:07:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:07:36.881619 | orchestrator | 2026-01-03 04:07:36 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:07:36.883936 | orchestrator | 2026-01-03 04:07:36 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:07:36.883990 | orchestrator | 2026-01-03 04:07:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:07:39.933300 | orchestrator | 2026-01-03 04:07:39 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:07:39.935021 | orchestrator | 2026-01-03 04:07:39 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:07:39.935337 | orchestrator | 2026-01-03 04:07:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:07:42.981142 | orchestrator | 2026-01-03 04:07:42 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:07:42.982538 | orchestrator | 2026-01-03 04:07:42 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:07:42.982581 | orchestrator | 2026-01-03 04:07:42 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 04:07:46.029810 | orchestrator | 2026-01-03 04:07:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:07:46.031748 | orchestrator | 2026-01-03 04:07:46 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:07:46.031851 | orchestrator | 2026-01-03 04:07:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:07:49.074173 | orchestrator | 2026-01-03 04:07:49 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:07:49.075271 | orchestrator | 2026-01-03 04:07:49 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:07:49.075349 | orchestrator | 2026-01-03 04:07:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:07:52.121004 | orchestrator | 2026-01-03 04:07:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:07:52.122568 | orchestrator | 2026-01-03 04:07:52 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:07:52.122667 | orchestrator | 2026-01-03 04:07:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:07:55.161641 | orchestrator | 2026-01-03 04:07:55 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:07:55.163164 | orchestrator | 2026-01-03 04:07:55 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:07:55.163222 | orchestrator | 2026-01-03 04:07:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:07:58.206530 | orchestrator | 2026-01-03 04:07:58 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:07:58.208137 | orchestrator | 2026-01-03 04:07:58 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:07:58.208222 | orchestrator | 2026-01-03 04:07:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:01.252449 | orchestrator | 2026-01-03 
04:08:01 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:08:01.254447 | orchestrator | 2026-01-03 04:08:01 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:08:01.254523 | orchestrator | 2026-01-03 04:08:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:04.295458 | orchestrator | 2026-01-03 04:08:04 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:08:04.297341 | orchestrator | 2026-01-03 04:08:04 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:08:04.297628 | orchestrator | 2026-01-03 04:08:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:07.349112 | orchestrator | 2026-01-03 04:08:07 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:08:07.352021 | orchestrator | 2026-01-03 04:08:07 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:08:07.352207 | orchestrator | 2026-01-03 04:08:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:10.398103 | orchestrator | 2026-01-03 04:08:10 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:08:10.401838 | orchestrator | 2026-01-03 04:08:10 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:08:10.401931 | orchestrator | 2026-01-03 04:08:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:13.457244 | orchestrator | 2026-01-03 04:08:13 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:08:13.459679 | orchestrator | 2026-01-03 04:08:13 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:08:13.459750 | orchestrator | 2026-01-03 04:08:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:16.507568 | orchestrator | 2026-01-03 04:08:16 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 04:08:16.509167 | orchestrator | 2026-01-03 04:08:16 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:08:16.509753 | orchestrator | 2026-01-03 04:08:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:19.558662 | orchestrator | 2026-01-03 04:08:19 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:08:19.560535 | orchestrator | 2026-01-03 04:08:19 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:08:19.560603 | orchestrator | 2026-01-03 04:08:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:22.612175 | orchestrator | 2026-01-03 04:08:22 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:08:22.614656 | orchestrator | 2026-01-03 04:08:22 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:08:22.614845 | orchestrator | 2026-01-03 04:08:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:25.665649 | orchestrator | 2026-01-03 04:08:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:08:25.666922 | orchestrator | 2026-01-03 04:08:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:08:25.666969 | orchestrator | 2026-01-03 04:08:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:28.713196 | orchestrator | 2026-01-03 04:08:28 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:08:28.715852 | orchestrator | 2026-01-03 04:08:28 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:08:28.715932 | orchestrator | 2026-01-03 04:08:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:31.768070 | orchestrator | 2026-01-03 04:08:31 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:08:31.769557 | orchestrator | 2026-01-03 04:08:31 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:08:31.769820 | orchestrator | 2026-01-03 04:08:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:34.824363 | orchestrator | 2026-01-03 04:08:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:08:34.825997 | orchestrator | 2026-01-03 04:08:34 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:08:34.826085 | orchestrator | 2026-01-03 04:08:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:37.871126 | orchestrator | 2026-01-03 04:08:37 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:08:37.872537 | orchestrator | 2026-01-03 04:08:37 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:08:37.872659 | orchestrator | 2026-01-03 04:08:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:40.922642 | orchestrator | 2026-01-03 04:08:40 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:08:40.924989 | orchestrator | 2026-01-03 04:08:40 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:08:40.925049 | orchestrator | 2026-01-03 04:08:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:43.971323 | orchestrator | 2026-01-03 04:08:43 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:08:43.973044 | orchestrator | 2026-01-03 04:08:43 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:08:43.973134 | orchestrator | 2026-01-03 04:08:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:47.019014 | orchestrator | 2026-01-03 04:08:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:08:47.020393 | orchestrator | 2026-01-03 04:08:47 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
04:08:47.020752 | orchestrator | 2026-01-03 04:08:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:50.066482 | orchestrator | 2026-01-03 04:08:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:08:50.068713 | orchestrator | 2026-01-03 04:08:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:08:50.068832 | orchestrator | 2026-01-03 04:08:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:53.116748 | orchestrator | 2026-01-03 04:08:53 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:08:53.117989 | orchestrator | 2026-01-03 04:08:53 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:08:53.118054 | orchestrator | 2026-01-03 04:08:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:56.165261 | orchestrator | 2026-01-03 04:08:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:08:56.167655 | orchestrator | 2026-01-03 04:08:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:08:56.167802 | orchestrator | 2026-01-03 04:08:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:59.210520 | orchestrator | 2026-01-03 04:08:59 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:08:59.211662 | orchestrator | 2026-01-03 04:08:59 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:08:59.211787 | orchestrator | 2026-01-03 04:08:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:02.256508 | orchestrator | 2026-01-03 04:09:02 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:09:02.257765 | orchestrator | 2026-01-03 04:09:02 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:09:02.257861 | orchestrator | 2026-01-03 04:09:02 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 04:09:05.312924 | orchestrator | 2026-01-03 04:09:05 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:09:05.315019 | orchestrator | 2026-01-03 04:09:05 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:09:05.315072 | orchestrator | 2026-01-03 04:09:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:08.364756 | orchestrator | 2026-01-03 04:09:08 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:09:08.366119 | orchestrator | 2026-01-03 04:09:08 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:09:08.366177 | orchestrator | 2026-01-03 04:09:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:11.412858 | orchestrator | 2026-01-03 04:09:11 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:09:11.414629 | orchestrator | 2026-01-03 04:09:11 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:09:11.414769 | orchestrator | 2026-01-03 04:09:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:14.465797 | orchestrator | 2026-01-03 04:09:14 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:09:14.468408 | orchestrator | 2026-01-03 04:09:14 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:09:14.468491 | orchestrator | 2026-01-03 04:09:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:17.517237 | orchestrator | 2026-01-03 04:09:17 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:09:17.519774 | orchestrator | 2026-01-03 04:09:17 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:09:17.519861 | orchestrator | 2026-01-03 04:09:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:20.569316 | orchestrator | 2026-01-03 
04:09:20 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:09:20.571019 | orchestrator | 2026-01-03 04:09:20 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:09:20.571074 | orchestrator | 2026-01-03 04:09:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:23.619010 | orchestrator | 2026-01-03 04:09:23 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:09:23.621210 | orchestrator | 2026-01-03 04:09:23 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:09:23.621269 | orchestrator | 2026-01-03 04:09:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:26.663839 | orchestrator | 2026-01-03 04:09:26 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:09:26.664887 | orchestrator | 2026-01-03 04:09:26 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:09:26.664958 | orchestrator | 2026-01-03 04:09:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:29.711959 | orchestrator | 2026-01-03 04:09:29 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:09:29.714577 | orchestrator | 2026-01-03 04:09:29 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:09:29.714660 | orchestrator | 2026-01-03 04:09:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:32.753188 | orchestrator | 2026-01-03 04:09:32 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:09:32.754695 | orchestrator | 2026-01-03 04:09:32 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:09:32.754753 | orchestrator | 2026-01-03 04:09:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:35.804006 | orchestrator | 2026-01-03 04:09:35 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 04:09:35.805485 | orchestrator | 2026-01-03 04:09:35 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:09:35.805572 | orchestrator | 2026-01-03 04:09:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:38.848513 | orchestrator | 2026-01-03 04:09:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:09:38.849958 | orchestrator | 2026-01-03 04:09:38 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:09:38.850002 | orchestrator | 2026-01-03 04:09:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:41.895261 | orchestrator | 2026-01-03 04:09:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:09:41.896590 | orchestrator | 2026-01-03 04:09:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:09:41.896771 | orchestrator | 2026-01-03 04:09:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:44.944506 | orchestrator | 2026-01-03 04:09:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:09:44.945773 | orchestrator | 2026-01-03 04:09:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:09:44.945811 | orchestrator | 2026-01-03 04:09:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:47.993783 | orchestrator | 2026-01-03 04:09:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:09:47.995638 | orchestrator | 2026-01-03 04:09:47 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:09:47.995741 | orchestrator | 2026-01-03 04:09:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:51.048771 | orchestrator | 2026-01-03 04:09:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:09:51.051124 | orchestrator | 2026-01-03 04:09:51 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:09:51.051171 | orchestrator | 2026-01-03 04:09:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:54.089553 | orchestrator | 2026-01-03 04:09:54 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:09:54.091774 | orchestrator | 2026-01-03 04:09:54 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:09:54.092127 | orchestrator | 2026-01-03 04:09:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:57.138305 | orchestrator | 2026-01-03 04:09:57 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:09:57.139743 | orchestrator | 2026-01-03 04:09:57 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:09:57.139869 | orchestrator | 2026-01-03 04:09:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:10:00.185485 | orchestrator | 2026-01-03 04:10:00 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:10:00.187111 | orchestrator | 2026-01-03 04:10:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:10:00.187243 | orchestrator | 2026-01-03 04:10:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:10:03.230714 | orchestrator | 2026-01-03 04:10:03 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:10:03.232324 | orchestrator | 2026-01-03 04:10:03 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:10:03.232406 | orchestrator | 2026-01-03 04:10:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:10:06.279556 | orchestrator | 2026-01-03 04:10:06 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:10:06.282133 | orchestrator | 2026-01-03 04:10:06 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
04:10:06.282175 | orchestrator | 2026-01-03 04:10:06 | INFO  | Wait 1 second(s) until the next check
2026-01-03 04:10:09.332088 | orchestrator | 2026-01-03 04:10:09 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 04:10:09.333835 | orchestrator | 2026-01-03 04:10:09 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 04:10:09.333935 | orchestrator | 2026-01-03 04:10:09 | INFO  | Wait 1 second(s) until the next check
[identical poll cycles for tasks b16f335a-ddd7-42d6-ae3e-bcacbe9793fa and 80634f4e-7557-474f-b43a-6fc42f9dfcdb, both remaining in state STARTED, repeated every ~3 seconds from 04:10:12 through 04:15:05]
2026-01-03 04:15:08.262388 | orchestrator | 2026-01-03 04:15:08 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:15:08.264319 | orchestrator | 2026-01-03 04:15:08 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:15:08.264392 | orchestrator | 2026-01-03 04:15:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:15:11.308535 | orchestrator | 2026-01-03 04:15:11 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:15:11.310661 | orchestrator | 2026-01-03 04:15:11 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:15:11.310791 | orchestrator | 2026-01-03 04:15:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:15:14.362827 | orchestrator | 2026-01-03 04:15:14 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:15:14.364672 | orchestrator | 2026-01-03 04:15:14 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:15:14.364804 | orchestrator | 2026-01-03 04:15:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:15:17.410073 | orchestrator | 2026-01-03 04:15:17 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:15:17.411493 | orchestrator | 2026-01-03 04:15:17 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:15:17.411526 | orchestrator | 2026-01-03 04:15:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:15:20.456550 | orchestrator | 2026-01-03 04:15:20 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:15:20.459379 | orchestrator | 2026-01-03 04:15:20 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:15:20.459704 | orchestrator | 2026-01-03 04:15:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:15:23.508628 | orchestrator | 2026-01-03 04:15:23 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:15:23.509868 | orchestrator | 2026-01-03 04:15:23 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
04:15:23.509957 | orchestrator | 2026-01-03 04:15:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:15:26.556015 | orchestrator | 2026-01-03 04:15:26 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:15:26.557237 | orchestrator | 2026-01-03 04:15:26 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:15:26.557422 | orchestrator | 2026-01-03 04:15:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:15:29.609129 | orchestrator | 2026-01-03 04:15:29 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:15:29.610727 | orchestrator | 2026-01-03 04:15:29 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:15:29.610941 | orchestrator | 2026-01-03 04:15:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:15:32.667316 | orchestrator | 2026-01-03 04:15:32 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:15:32.669709 | orchestrator | 2026-01-03 04:15:32 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:15:32.669758 | orchestrator | 2026-01-03 04:15:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:15:35.716638 | orchestrator | 2026-01-03 04:15:35 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:15:35.718134 | orchestrator | 2026-01-03 04:15:35 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:15:35.718315 | orchestrator | 2026-01-03 04:15:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:15:38.768191 | orchestrator | 2026-01-03 04:15:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:15:38.770546 | orchestrator | 2026-01-03 04:15:38 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:15:38.770729 | orchestrator | 2026-01-03 04:15:38 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 04:15:41.819206 | orchestrator | 2026-01-03 04:15:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:15:41.820406 | orchestrator | 2026-01-03 04:15:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:15:41.820500 | orchestrator | 2026-01-03 04:15:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:15:44.864863 | orchestrator | 2026-01-03 04:15:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:15:44.866989 | orchestrator | 2026-01-03 04:15:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:15:44.867051 | orchestrator | 2026-01-03 04:15:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:15:47.918206 | orchestrator | 2026-01-03 04:15:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:15:47.920754 | orchestrator | 2026-01-03 04:15:47 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:15:47.920806 | orchestrator | 2026-01-03 04:15:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:15:50.969305 | orchestrator | 2026-01-03 04:15:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:15:50.970663 | orchestrator | 2026-01-03 04:15:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:15:50.970703 | orchestrator | 2026-01-03 04:15:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:15:54.030640 | orchestrator | 2026-01-03 04:15:54 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:15:54.032690 | orchestrator | 2026-01-03 04:15:54 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:15:54.032772 | orchestrator | 2026-01-03 04:15:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:15:57.088807 | orchestrator | 2026-01-03 
04:15:57 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:15:57.094933 | orchestrator | 2026-01-03 04:15:57 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:15:57.095008 | orchestrator | 2026-01-03 04:15:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:16:00.133233 | orchestrator | 2026-01-03 04:16:00 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:16:00.135166 | orchestrator | 2026-01-03 04:16:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:16:00.135270 | orchestrator | 2026-01-03 04:16:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:16:03.180342 | orchestrator | 2026-01-03 04:16:03 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:16:03.182357 | orchestrator | 2026-01-03 04:16:03 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:16:03.182390 | orchestrator | 2026-01-03 04:16:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:16:06.228911 | orchestrator | 2026-01-03 04:16:06 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:16:06.230354 | orchestrator | 2026-01-03 04:16:06 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:16:06.230633 | orchestrator | 2026-01-03 04:16:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:16:09.280563 | orchestrator | 2026-01-03 04:16:09 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:16:09.281765 | orchestrator | 2026-01-03 04:16:09 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:16:09.281921 | orchestrator | 2026-01-03 04:16:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:16:12.331880 | orchestrator | 2026-01-03 04:16:12 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 04:16:12.332330 | orchestrator | 2026-01-03 04:16:12 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:16:12.332841 | orchestrator | 2026-01-03 04:16:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:16:15.384129 | orchestrator | 2026-01-03 04:16:15 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:16:15.385395 | orchestrator | 2026-01-03 04:16:15 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:16:15.385476 | orchestrator | 2026-01-03 04:16:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:16:18.435010 | orchestrator | 2026-01-03 04:16:18 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:16:18.437159 | orchestrator | 2026-01-03 04:16:18 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:16:18.437250 | orchestrator | 2026-01-03 04:16:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:16:21.483992 | orchestrator | 2026-01-03 04:16:21 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:16:21.485295 | orchestrator | 2026-01-03 04:16:21 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:16:21.485775 | orchestrator | 2026-01-03 04:16:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:16:24.537147 | orchestrator | 2026-01-03 04:16:24 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:16:24.538952 | orchestrator | 2026-01-03 04:16:24 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:16:24.539052 | orchestrator | 2026-01-03 04:16:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:16:27.591108 | orchestrator | 2026-01-03 04:16:27 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:16:27.592337 | orchestrator | 2026-01-03 04:16:27 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:16:27.592606 | orchestrator | 2026-01-03 04:16:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:16:30.645985 | orchestrator | 2026-01-03 04:16:30 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:16:30.647257 | orchestrator | 2026-01-03 04:16:30 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:16:30.647542 | orchestrator | 2026-01-03 04:16:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:16:33.697041 | orchestrator | 2026-01-03 04:16:33 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:16:33.698676 | orchestrator | 2026-01-03 04:16:33 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:16:33.699228 | orchestrator | 2026-01-03 04:16:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:16:36.764251 | orchestrator | 2026-01-03 04:16:36 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:16:36.765323 | orchestrator | 2026-01-03 04:16:36 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:16:36.765376 | orchestrator | 2026-01-03 04:16:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:16:39.823053 | orchestrator | 2026-01-03 04:16:39 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:16:39.824136 | orchestrator | 2026-01-03 04:16:39 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:16:39.824260 | orchestrator | 2026-01-03 04:16:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:16:42.875818 | orchestrator | 2026-01-03 04:16:42 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:16:42.877629 | orchestrator | 2026-01-03 04:16:42 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
04:16:42.877691 | orchestrator | 2026-01-03 04:16:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:16:45.927069 | orchestrator | 2026-01-03 04:16:45 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:16:45.928533 | orchestrator | 2026-01-03 04:16:45 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:16:45.928636 | orchestrator | 2026-01-03 04:16:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:16:48.982051 | orchestrator | 2026-01-03 04:16:48 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:16:48.986393 | orchestrator | 2026-01-03 04:16:48 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:16:48.986468 | orchestrator | 2026-01-03 04:16:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:16:52.037815 | orchestrator | 2026-01-03 04:16:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:16:52.039768 | orchestrator | 2026-01-03 04:16:52 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:16:52.040806 | orchestrator | 2026-01-03 04:16:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:16:55.090615 | orchestrator | 2026-01-03 04:16:55 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:16:55.092535 | orchestrator | 2026-01-03 04:16:55 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:16:55.092735 | orchestrator | 2026-01-03 04:16:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:16:58.135188 | orchestrator | 2026-01-03 04:16:58 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:16:58.137244 | orchestrator | 2026-01-03 04:16:58 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:16:58.137647 | orchestrator | 2026-01-03 04:16:58 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 04:17:01.180829 | orchestrator | 2026-01-03 04:17:01 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:17:01.182484 | orchestrator | 2026-01-03 04:17:01 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:17:01.182631 | orchestrator | 2026-01-03 04:17:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:17:04.233215 | orchestrator | 2026-01-03 04:17:04 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:17:04.235548 | orchestrator | 2026-01-03 04:17:04 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:17:04.235649 | orchestrator | 2026-01-03 04:17:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:17:07.284469 | orchestrator | 2026-01-03 04:17:07 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:17:07.286342 | orchestrator | 2026-01-03 04:17:07 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:17:07.286391 | orchestrator | 2026-01-03 04:17:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:17:10.341382 | orchestrator | 2026-01-03 04:17:10 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:17:10.344268 | orchestrator | 2026-01-03 04:17:10 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:17:10.344598 | orchestrator | 2026-01-03 04:17:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:17:13.392538 | orchestrator | 2026-01-03 04:17:13 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:17:13.395616 | orchestrator | 2026-01-03 04:17:13 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:17:13.395658 | orchestrator | 2026-01-03 04:17:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:17:16.446066 | orchestrator | 2026-01-03 
04:17:16 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:17:16.448045 | orchestrator | 2026-01-03 04:17:16 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:17:16.448074 | orchestrator | 2026-01-03 04:17:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:17:19.494634 | orchestrator | 2026-01-03 04:17:19 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:17:19.496438 | orchestrator | 2026-01-03 04:17:19 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:17:19.496633 | orchestrator | 2026-01-03 04:17:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:17:22.546406 | orchestrator | 2026-01-03 04:17:22 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:17:22.547647 | orchestrator | 2026-01-03 04:17:22 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:17:22.547710 | orchestrator | 2026-01-03 04:17:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:17:25.598294 | orchestrator | 2026-01-03 04:17:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:17:25.600802 | orchestrator | 2026-01-03 04:17:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:17:25.600872 | orchestrator | 2026-01-03 04:17:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:17:28.651442 | orchestrator | 2026-01-03 04:17:28 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:17:28.652837 | orchestrator | 2026-01-03 04:17:28 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:17:28.652888 | orchestrator | 2026-01-03 04:17:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:17:31.698130 | orchestrator | 2026-01-03 04:17:31 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 04:17:31.699768 | orchestrator | 2026-01-03 04:17:31 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:17:31.699833 | orchestrator | 2026-01-03 04:17:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:17:34.746340 | orchestrator | 2026-01-03 04:17:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:17:34.748784 | orchestrator | 2026-01-03 04:17:34 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:17:34.748859 | orchestrator | 2026-01-03 04:17:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:17:37.795958 | orchestrator | 2026-01-03 04:17:37 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:17:37.797690 | orchestrator | 2026-01-03 04:17:37 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:17:37.797724 | orchestrator | 2026-01-03 04:17:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:17:40.849139 | orchestrator | 2026-01-03 04:17:40 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:17:40.852197 | orchestrator | 2026-01-03 04:17:40 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:17:40.852308 | orchestrator | 2026-01-03 04:17:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:17:43.900379 | orchestrator | 2026-01-03 04:17:43 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:17:43.902442 | orchestrator | 2026-01-03 04:17:43 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:17:43.902486 | orchestrator | 2026-01-03 04:17:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:17:46.950763 | orchestrator | 2026-01-03 04:17:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:17:46.953508 | orchestrator | 2026-01-03 04:17:46 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:17:46.953771 | orchestrator | 2026-01-03 04:17:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:17:49.991110 | orchestrator | 2026-01-03 04:17:49 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:17:49.992734 | orchestrator | 2026-01-03 04:17:49 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:17:49.992824 | orchestrator | 2026-01-03 04:17:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:17:53.039515 | orchestrator | 2026-01-03 04:17:53 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:17:53.041696 | orchestrator | 2026-01-03 04:17:53 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:17:53.041760 | orchestrator | 2026-01-03 04:17:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:17:56.091715 | orchestrator | 2026-01-03 04:17:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:17:56.093634 | orchestrator | 2026-01-03 04:17:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:17:56.093673 | orchestrator | 2026-01-03 04:17:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:17:59.140785 | orchestrator | 2026-01-03 04:17:59 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:17:59.141869 | orchestrator | 2026-01-03 04:17:59 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:17:59.141980 | orchestrator | 2026-01-03 04:17:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:02.187538 | orchestrator | 2026-01-03 04:18:02 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:18:02.187704 | orchestrator | 2026-01-03 04:18:02 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
04:18:02.187727 | orchestrator | 2026-01-03 04:18:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:05.238925 | orchestrator | 2026-01-03 04:18:05 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:18:05.240739 | orchestrator | 2026-01-03 04:18:05 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:18:05.240785 | orchestrator | 2026-01-03 04:18:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:08.285465 | orchestrator | 2026-01-03 04:18:08 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:18:08.287210 | orchestrator | 2026-01-03 04:18:08 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:18:08.287246 | orchestrator | 2026-01-03 04:18:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:11.336354 | orchestrator | 2026-01-03 04:18:11 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:18:11.338394 | orchestrator | 2026-01-03 04:18:11 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:18:11.338536 | orchestrator | 2026-01-03 04:18:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:14.384327 | orchestrator | 2026-01-03 04:18:14 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:18:14.386849 | orchestrator | 2026-01-03 04:18:14 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:18:14.386925 | orchestrator | 2026-01-03 04:18:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:17.431859 | orchestrator | 2026-01-03 04:18:17 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:18:17.433891 | orchestrator | 2026-01-03 04:18:17 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:18:17.433966 | orchestrator | 2026-01-03 04:18:17 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 04:18:20.476757 | orchestrator | 2026-01-03 04:18:20 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:18:20.479328 | orchestrator | 2026-01-03 04:18:20 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:18:20.479403 | orchestrator | 2026-01-03 04:18:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:23.518456 | orchestrator | 2026-01-03 04:18:23 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:18:23.519862 | orchestrator | 2026-01-03 04:18:23 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:18:23.520119 | orchestrator | 2026-01-03 04:18:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:26.569072 | orchestrator | 2026-01-03 04:18:26 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:18:26.570495 | orchestrator | 2026-01-03 04:18:26 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:18:26.570559 | orchestrator | 2026-01-03 04:18:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:29.613352 | orchestrator | 2026-01-03 04:18:29 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:18:29.615745 | orchestrator | 2026-01-03 04:18:29 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:18:29.615836 | orchestrator | 2026-01-03 04:18:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:32.660035 | orchestrator | 2026-01-03 04:18:32 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:18:32.661391 | orchestrator | 2026-01-03 04:18:32 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:18:32.661531 | orchestrator | 2026-01-03 04:18:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:35.706424 | orchestrator | 2026-01-03 
04:18:35 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:18:35.707490 | orchestrator | 2026-01-03 04:18:35 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:18:35.707753 | orchestrator | 2026-01-03 04:18:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:38.756029 | orchestrator | 2026-01-03 04:18:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:18:38.758071 | orchestrator | 2026-01-03 04:18:38 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:18:38.758105 | orchestrator | 2026-01-03 04:18:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:41.808786 | orchestrator | 2026-01-03 04:18:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:18:41.809803 | orchestrator | 2026-01-03 04:18:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:18:41.809857 | orchestrator | 2026-01-03 04:18:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:44.855730 | orchestrator | 2026-01-03 04:18:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:18:44.857278 | orchestrator | 2026-01-03 04:18:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:18:44.857361 | orchestrator | 2026-01-03 04:18:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:47.898918 | orchestrator | 2026-01-03 04:18:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:18:47.900593 | orchestrator | 2026-01-03 04:18:47 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:18:47.900674 | orchestrator | 2026-01-03 04:18:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:50.945089 | orchestrator | 2026-01-03 04:18:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED
2026-01-03 04:18:50.948405 | orchestrator | 2026-01-03 04:18:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 04:18:50.948524 | orchestrator | 2026-01-03 04:18:50 | INFO  | Wait 1 second(s) until the next check
2026-01-03 04:18:53.999479 | orchestrator | 2026-01-03 04:18:53 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 04:18:54.001762 | orchestrator | 2026-01-03 04:18:54 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 04:18:54.002548 | orchestrator | 2026-01-03 04:18:54 | INFO  | Wait 1 second(s) until the next check
2026-01-03 04:18:57.052531 | orchestrator | 2026-01-03 04:18:57 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 04:18:57.057104 | orchestrator | 2026-01-03 04:18:57 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 04:18:57.057244 | orchestrator | 2026-01-03 04:18:57 | INFO  | Wait 1 second(s) until the next check
2026-01-03 04:24:20.198833 | orchestrator | 2026-01-03 04:24:20 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 04:24:20.200316 | orchestrator | 2026-01-03 04:24:20 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 04:24:20.200585 | orchestrator | 2026-01-03 04:24:20 | INFO  | Wait 1 second(s) until the next check
2026-01-03 04:24:23.250838 | orchestrator | 2026-01-03 04:24:23 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 04:24:23.253776 | orchestrator | 2026-01-03 04:24:23 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:24:23.253841 | orchestrator | 2026-01-03 04:24:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:24:26.299747 | orchestrator | 2026-01-03 04:24:26 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:24:26.302733 | orchestrator | 2026-01-03 04:24:26 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:24:26.302801 | orchestrator | 2026-01-03 04:24:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:24:29.348474 | orchestrator | 2026-01-03 04:24:29 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:24:29.350402 | orchestrator | 2026-01-03 04:24:29 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:24:29.350484 | orchestrator | 2026-01-03 04:24:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:24:32.397461 | orchestrator | 2026-01-03 04:24:32 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:24:32.399213 | orchestrator | 2026-01-03 04:24:32 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:24:32.399270 | orchestrator | 2026-01-03 04:24:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:24:35.438064 | orchestrator | 2026-01-03 04:24:35 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:24:35.439321 | orchestrator | 2026-01-03 04:24:35 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:24:35.439405 | orchestrator | 2026-01-03 04:24:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:24:38.484151 | orchestrator | 2026-01-03 04:24:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:24:38.486867 | orchestrator | 2026-01-03 04:24:38 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
04:24:38.486943 | orchestrator | 2026-01-03 04:24:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:24:41.533416 | orchestrator | 2026-01-03 04:24:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:24:41.535853 | orchestrator | 2026-01-03 04:24:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:24:41.535956 | orchestrator | 2026-01-03 04:24:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:24:44.581675 | orchestrator | 2026-01-03 04:24:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:24:44.583683 | orchestrator | 2026-01-03 04:24:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:24:44.583778 | orchestrator | 2026-01-03 04:24:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:24:47.629392 | orchestrator | 2026-01-03 04:24:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:24:47.630827 | orchestrator | 2026-01-03 04:24:47 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:24:47.630902 | orchestrator | 2026-01-03 04:24:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:24:50.675233 | orchestrator | 2026-01-03 04:24:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:24:50.677262 | orchestrator | 2026-01-03 04:24:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:24:50.677635 | orchestrator | 2026-01-03 04:24:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:24:53.718170 | orchestrator | 2026-01-03 04:24:53 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:24:53.718961 | orchestrator | 2026-01-03 04:24:53 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:24:53.719005 | orchestrator | 2026-01-03 04:24:53 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 04:24:56.764838 | orchestrator | 2026-01-03 04:24:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:24:56.768578 | orchestrator | 2026-01-03 04:24:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:24:56.768759 | orchestrator | 2026-01-03 04:24:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:24:59.814013 | orchestrator | 2026-01-03 04:24:59 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:24:59.815551 | orchestrator | 2026-01-03 04:24:59 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:24:59.815629 | orchestrator | 2026-01-03 04:24:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:25:02.862978 | orchestrator | 2026-01-03 04:25:02 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:25:02.864052 | orchestrator | 2026-01-03 04:25:02 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:25:02.864073 | orchestrator | 2026-01-03 04:25:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:25:05.911787 | orchestrator | 2026-01-03 04:25:05 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:25:05.913560 | orchestrator | 2026-01-03 04:25:05 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:25:05.913603 | orchestrator | 2026-01-03 04:25:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:25:08.959227 | orchestrator | 2026-01-03 04:25:08 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:25:08.962083 | orchestrator | 2026-01-03 04:25:08 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:25:08.962139 | orchestrator | 2026-01-03 04:25:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:25:12.007273 | orchestrator | 2026-01-03 
04:25:12 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:25:12.007876 | orchestrator | 2026-01-03 04:25:12 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:25:12.007917 | orchestrator | 2026-01-03 04:25:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:25:15.056072 | orchestrator | 2026-01-03 04:25:15 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:25:15.058366 | orchestrator | 2026-01-03 04:25:15 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:25:15.058436 | orchestrator | 2026-01-03 04:25:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:25:18.106192 | orchestrator | 2026-01-03 04:25:18 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:25:18.108113 | orchestrator | 2026-01-03 04:25:18 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:25:18.108155 | orchestrator | 2026-01-03 04:25:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:25:21.150252 | orchestrator | 2026-01-03 04:25:21 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:25:21.151892 | orchestrator | 2026-01-03 04:25:21 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:25:21.151941 | orchestrator | 2026-01-03 04:25:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:25:24.196157 | orchestrator | 2026-01-03 04:25:24 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:25:24.197771 | orchestrator | 2026-01-03 04:25:24 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:25:24.197819 | orchestrator | 2026-01-03 04:25:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:25:27.237993 | orchestrator | 2026-01-03 04:25:27 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 04:25:27.240530 | orchestrator | 2026-01-03 04:25:27 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:25:27.240642 | orchestrator | 2026-01-03 04:25:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:25:30.283377 | orchestrator | 2026-01-03 04:25:30 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:25:30.283851 | orchestrator | 2026-01-03 04:25:30 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:25:30.283945 | orchestrator | 2026-01-03 04:25:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:25:33.332993 | orchestrator | 2026-01-03 04:25:33 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:25:33.334892 | orchestrator | 2026-01-03 04:25:33 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:25:33.335017 | orchestrator | 2026-01-03 04:25:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:25:36.380366 | orchestrator | 2026-01-03 04:25:36 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:25:36.383062 | orchestrator | 2026-01-03 04:25:36 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:25:36.383107 | orchestrator | 2026-01-03 04:25:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:25:39.427864 | orchestrator | 2026-01-03 04:25:39 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:25:39.429909 | orchestrator | 2026-01-03 04:25:39 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:25:39.430005 | orchestrator | 2026-01-03 04:25:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:25:42.476417 | orchestrator | 2026-01-03 04:25:42 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:25:42.477697 | orchestrator | 2026-01-03 04:25:42 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:25:42.477889 | orchestrator | 2026-01-03 04:25:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:25:45.524439 | orchestrator | 2026-01-03 04:25:45 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:25:45.527677 | orchestrator | 2026-01-03 04:25:45 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:25:45.527741 | orchestrator | 2026-01-03 04:25:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:25:48.577006 | orchestrator | 2026-01-03 04:25:48 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:25:48.578962 | orchestrator | 2026-01-03 04:25:48 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:25:48.579041 | orchestrator | 2026-01-03 04:25:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:25:51.623706 | orchestrator | 2026-01-03 04:25:51 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:25:51.625978 | orchestrator | 2026-01-03 04:25:51 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:25:51.626073 | orchestrator | 2026-01-03 04:25:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:25:54.670766 | orchestrator | 2026-01-03 04:25:54 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:25:54.672517 | orchestrator | 2026-01-03 04:25:54 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:25:54.672585 | orchestrator | 2026-01-03 04:25:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:25:57.718213 | orchestrator | 2026-01-03 04:25:57 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:25:57.720464 | orchestrator | 2026-01-03 04:25:57 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
04:25:57.720660 | orchestrator | 2026-01-03 04:25:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:00.764720 | orchestrator | 2026-01-03 04:26:00 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:26:00.766470 | orchestrator | 2026-01-03 04:26:00 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:26:00.766556 | orchestrator | 2026-01-03 04:26:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:03.813170 | orchestrator | 2026-01-03 04:26:03 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:26:03.815050 | orchestrator | 2026-01-03 04:26:03 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:26:03.815682 | orchestrator | 2026-01-03 04:26:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:06.862100 | orchestrator | 2026-01-03 04:26:06 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:26:06.863551 | orchestrator | 2026-01-03 04:26:06 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:26:06.863631 | orchestrator | 2026-01-03 04:26:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:09.909385 | orchestrator | 2026-01-03 04:26:09 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:26:09.911813 | orchestrator | 2026-01-03 04:26:09 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:26:09.911880 | orchestrator | 2026-01-03 04:26:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:12.956672 | orchestrator | 2026-01-03 04:26:12 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:26:12.958567 | orchestrator | 2026-01-03 04:26:12 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:26:12.958645 | orchestrator | 2026-01-03 04:26:12 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 04:26:16.006747 | orchestrator | 2026-01-03 04:26:16 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:26:16.008349 | orchestrator | 2026-01-03 04:26:16 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:26:16.008427 | orchestrator | 2026-01-03 04:26:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:19.061900 | orchestrator | 2026-01-03 04:26:19 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:26:19.063651 | orchestrator | 2026-01-03 04:26:19 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:26:19.063693 | orchestrator | 2026-01-03 04:26:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:22.102560 | orchestrator | 2026-01-03 04:26:22 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:26:22.103325 | orchestrator | 2026-01-03 04:26:22 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:26:22.103361 | orchestrator | 2026-01-03 04:26:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:25.143814 | orchestrator | 2026-01-03 04:26:25 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:26:25.144889 | orchestrator | 2026-01-03 04:26:25 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:26:25.144975 | orchestrator | 2026-01-03 04:26:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:28.191402 | orchestrator | 2026-01-03 04:26:28 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:26:28.193959 | orchestrator | 2026-01-03 04:26:28 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:26:28.194081 | orchestrator | 2026-01-03 04:26:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:31.252121 | orchestrator | 2026-01-03 
04:26:31 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:26:31.253037 | orchestrator | 2026-01-03 04:26:31 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:26:31.253531 | orchestrator | 2026-01-03 04:26:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:34.300387 | orchestrator | 2026-01-03 04:26:34 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:26:34.302737 | orchestrator | 2026-01-03 04:26:34 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:26:34.302812 | orchestrator | 2026-01-03 04:26:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:37.343667 | orchestrator | 2026-01-03 04:26:37 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:26:37.345290 | orchestrator | 2026-01-03 04:26:37 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:26:37.345343 | orchestrator | 2026-01-03 04:26:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:40.392524 | orchestrator | 2026-01-03 04:26:40 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:26:40.394394 | orchestrator | 2026-01-03 04:26:40 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:26:40.394497 | orchestrator | 2026-01-03 04:26:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:43.440716 | orchestrator | 2026-01-03 04:26:43 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:26:43.442112 | orchestrator | 2026-01-03 04:26:43 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:26:43.442155 | orchestrator | 2026-01-03 04:26:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:46.484844 | orchestrator | 2026-01-03 04:26:46 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED 2026-01-03 04:26:46.487733 | orchestrator | 2026-01-03 04:26:46 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:26:46.487799 | orchestrator | 2026-01-03 04:26:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:49.535902 | orchestrator | 2026-01-03 04:26:49 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:26:49.538412 | orchestrator | 2026-01-03 04:26:49 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:26:49.538482 | orchestrator | 2026-01-03 04:26:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:52.579887 | orchestrator | 2026-01-03 04:26:52 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:26:52.581371 | orchestrator | 2026-01-03 04:26:52 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:26:52.581398 | orchestrator | 2026-01-03 04:26:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:55.625854 | orchestrator | 2026-01-03 04:26:55 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:26:55.627889 | orchestrator | 2026-01-03 04:26:55 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:26:55.628020 | orchestrator | 2026-01-03 04:26:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:58.674470 | orchestrator | 2026-01-03 04:26:58 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:26:58.676106 | orchestrator | 2026-01-03 04:26:58 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:26:58.676833 | orchestrator | 2026-01-03 04:26:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:01.713673 | orchestrator | 2026-01-03 04:27:01 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:27:01.715686 | orchestrator | 2026-01-03 04:27:01 | INFO  
| Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:27:01.715735 | orchestrator | 2026-01-03 04:27:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:04.758298 | orchestrator | 2026-01-03 04:27:04 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:27:04.759801 | orchestrator | 2026-01-03 04:27:04 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:27:04.759849 | orchestrator | 2026-01-03 04:27:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:07.802378 | orchestrator | 2026-01-03 04:27:07 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:27:07.805483 | orchestrator | 2026-01-03 04:27:07 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:27:07.805564 | orchestrator | 2026-01-03 04:27:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:10.856828 | orchestrator | 2026-01-03 04:27:10 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:27:10.858602 | orchestrator | 2026-01-03 04:27:10 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:27:10.858660 | orchestrator | 2026-01-03 04:27:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:13.907824 | orchestrator | 2026-01-03 04:27:13 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:27:13.909337 | orchestrator | 2026-01-03 04:27:13 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:27:13.909430 | orchestrator | 2026-01-03 04:27:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:16.954719 | orchestrator | 2026-01-03 04:27:16 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:27:16.959211 | orchestrator | 2026-01-03 04:27:16 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 
04:27:16.959471 | orchestrator | 2026-01-03 04:27:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:20.015782 | orchestrator | 2026-01-03 04:27:20 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:27:20.017625 | orchestrator | 2026-01-03 04:27:20 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:27:20.017718 | orchestrator | 2026-01-03 04:27:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:23.067975 | orchestrator | 2026-01-03 04:27:23 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:27:23.071061 | orchestrator | 2026-01-03 04:27:23 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:27:23.071098 | orchestrator | 2026-01-03 04:27:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:26.117786 | orchestrator | 2026-01-03 04:27:26 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:27:26.118866 | orchestrator | 2026-01-03 04:27:26 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:27:26.119054 | orchestrator | 2026-01-03 04:27:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:29.167560 | orchestrator | 2026-01-03 04:27:29 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:27:29.168809 | orchestrator | 2026-01-03 04:27:29 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:27:29.168994 | orchestrator | 2026-01-03 04:27:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:32.221118 | orchestrator | 2026-01-03 04:27:32 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:27:32.223606 | orchestrator | 2026-01-03 04:27:32 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:27:32.223756 | orchestrator | 2026-01-03 04:27:32 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 04:27:35.270433 | orchestrator | 2026-01-03 04:27:35 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:27:35.271381 | orchestrator | 2026-01-03 04:27:35 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:27:35.271459 | orchestrator | 2026-01-03 04:27:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:38.315761 | orchestrator | 2026-01-03 04:27:38 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:27:38.317049 | orchestrator | 2026-01-03 04:27:38 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:27:38.317462 | orchestrator | 2026-01-03 04:27:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:41.364317 | orchestrator | 2026-01-03 04:27:41 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:27:41.366441 | orchestrator | 2026-01-03 04:27:41 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:27:41.366515 | orchestrator | 2026-01-03 04:27:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:44.408211 | orchestrator | 2026-01-03 04:27:44 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:27:44.410216 | orchestrator | 2026-01-03 04:27:44 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:27:44.410272 | orchestrator | 2026-01-03 04:27:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:47.452682 | orchestrator | 2026-01-03 04:27:47 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:27:47.455317 | orchestrator | 2026-01-03 04:27:47 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:27:47.455462 | orchestrator | 2026-01-03 04:27:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:50.503228 | orchestrator | 2026-01-03 
04:27:50 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:27:50.504290 | orchestrator | 2026-01-03 04:27:50 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:27:50.504336 | orchestrator | 2026-01-03 04:27:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:53.548046 | orchestrator | 2026-01-03 04:27:53 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:27:53.550281 | orchestrator | 2026-01-03 04:27:53 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:27:53.550408 | orchestrator | 2026-01-03 04:27:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:56.597476 | orchestrator | 2026-01-03 04:27:56 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:27:56.600144 | orchestrator | 2026-01-03 04:27:56 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:27:56.600221 | orchestrator | 2026-01-03 04:27:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:59.647650 | orchestrator | 2026-01-03 04:27:59 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:27:59.648483 | orchestrator | 2026-01-03 04:27:59 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:27:59.648722 | orchestrator | 2026-01-03 04:27:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:28:02.697566 | orchestrator | 2026-01-03 04:28:02 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED 2026-01-03 04:28:02.700554 | orchestrator | 2026-01-03 04:28:02 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED 2026-01-03 04:28:02.700785 | orchestrator | 2026-01-03 04:28:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:28:05.749249 | orchestrator | 2026-01-03 04:28:05 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state 
STARTED
2026-01-03 04:28:05.750703 | orchestrator | 2026-01-03 04:28:05 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 04:28:05.750746 | orchestrator | 2026-01-03 04:28:05 | INFO  | Wait 1 second(s) until the next check
[... identical polling cycle (Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa in state STARTED, Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb in state STARTED, 1-second wait) repeated every ~3 seconds from 04:28:08 until 04:30:22 ...]
2026-01-03 04:30:22.926050 | orchestrator | 2026-01-03 04:30:22 | INFO  | Task b16f335a-ddd7-42d6-ae3e-bcacbe9793fa is in state STARTED
2026-01-03 04:30:22.926170 | orchestrator | 2026-01-03 04:30:22 | INFO  | Task 80634f4e-7557-474f-b43a-6fc42f9dfcdb is in state STARTED
2026-01-03 04:30:22.926178 | orchestrator | 2026-01-03 04:30:22 | INFO  | Wait 1 second(s) until the next check
2026-01-03 04:30:24.671458 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-01-03 04:30:24.676466 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-03 04:30:25.559010 |
2026-01-03 04:30:25.559181 | PLAY [Post output play]
2026-01-03 04:30:25.579952 |
2026-01-03 04:30:25.580135 | LOOP [stage-output : Register sources]
2026-01-03 04:30:25.648489 |
2026-01-03 04:30:25.648762 | TASK [stage-output : Check sudo]
2026-01-03 04:30:26.572996 | orchestrator | sudo: a password is required
2026-01-03 04:30:26.716164 | orchestrator | ok: Runtime: 0:00:00.011788
2026-01-03 04:30:26.732364 |
2026-01-03 04:30:26.732558 | LOOP [stage-output : Set source and destination for files and folders]
2026-01-03 04:30:26.764905 |
2026-01-03 04:30:26.765119 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-01-03 04:30:26.826956 | orchestrator | ok
2026-01-03 04:30:26.833246 |
2026-01-03 04:30:26.833379 | LOOP [stage-output : Ensure target folders exist]
2026-01-03 04:30:27.290931 | orchestrator | ok: "docs"
2026-01-03 04:30:27.291200 |
2026-01-03 04:30:27.549792 | orchestrator | ok: "artifacts"
2026-01-03 04:30:27.809910 | orchestrator | ok: "logs"
2026-01-03 04:30:27.825844 |
2026-01-03 04:30:27.825995 | LOOP [stage-output : Copy files and folders to staging folder]
2026-01-03 04:30:27.866584 |
2026-01-03 04:30:27.866957 | TASK [stage-output : Make all log files readable]
2026-01-03 04:30:28.212425 | orchestrator | ok
2026-01-03 04:30:28.220828 |
2026-01-03 04:30:28.220965 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-01-03 04:30:28.255956 | orchestrator | skipping: Conditional result was False
2026-01-03 04:30:28.264309 |
2026-01-03 04:30:28.264439 | TASK [stage-output : Discover log files for compression]
2026-01-03 04:30:28.298935 | orchestrator | skipping: Conditional result was False
2026-01-03 04:30:28.306273 |
2026-01-03 04:30:28.306403 | LOOP [stage-output : Archive everything from logs]
2026-01-03 04:30:28.345893 |
2026-01-03 04:30:28.346079 | PLAY [Post cleanup play]
2026-01-03 04:30:28.354138 |
2026-01-03 04:30:28.354262 | TASK [Set cloud fact (Zuul deployment)]
2026-01-03 04:30:28.405036 | orchestrator | ok
2026-01-03 04:30:28.413799 |
2026-01-03 04:30:28.413925 | TASK [Set cloud fact (local deployment)]
2026-01-03 04:30:28.437598 | orchestrator | skipping: Conditional result was False
2026-01-03 04:30:28.445747 |
2026-01-03 04:30:28.445880 | TASK [Clean the cloud environment]
2026-01-03 04:30:31.312546 | orchestrator | 2026-01-03 04:30:31 - clean up servers
2026-01-03 04:30:32.208511 | orchestrator | 2026-01-03 04:30:32 - testbed-manager
2026-01-03 04:30:32.307604 | orchestrator | 2026-01-03 04:30:32 - testbed-node-0
2026-01-03 04:30:32.401994 | orchestrator | 2026-01-03 04:30:32 - testbed-node-5
2026-01-03 04:30:32.495213 | orchestrator | 2026-01-03 04:30:32 - testbed-node-2
2026-01-03 04:30:32.594035 | orchestrator | 2026-01-03 04:30:32 - testbed-node-3
2026-01-03 04:30:32.697757 | orchestrator | 2026-01-03 04:30:32 - testbed-node-4
2026-01-03 04:30:32.787540 | orchestrator | 2026-01-03 04:30:32 - testbed-node-1
2026-01-03 04:30:32.896479 | orchestrator | 2026-01-03 04:30:32 - clean up keypairs
2026-01-03 04:30:32.917900 | orchestrator | 2026-01-03 04:30:32 - testbed
2026-01-03 04:30:32.944546 | orchestrator | 2026-01-03 04:30:32 - wait for servers to be gone
2026-01-03 04:30:50.685959 | orchestrator | 2026-01-03 04:30:50 - clean up ports
2026-01-03 04:30:50.877510 | orchestrator | 2026-01-03 04:30:50 - 6ec9e1b1-201d-489e-b84b-6bcb14e2eb54
2026-01-03 04:30:51.170207 | orchestrator | 2026-01-03 04:30:51 - ba80ce6d-322e-4824-b721-4e87465eefa4
2026-01-03 04:30:51.792423 | orchestrator | 2026-01-03 04:30:51 - dd0adf9e-f6c6-4f87-9cd0-9f206d233ce4
2026-01-03 04:30:52.096096 | orchestrator | 2026-01-03 04:30:52 - e399d560-f444-4957-a3ca-0b9266d3839d
2026-01-03 04:30:52.356404 | orchestrator | 2026-01-03 04:30:52 - e3e70582-3db2-42af-b235-f0e105bfe1ee
2026-01-03 04:30:52.588632 | orchestrator | 2026-01-03 04:30:52 - e4b026a2-acc0-47d3-93b1-cf4477c3bbb7
2026-01-03 04:30:52.854671 | orchestrator | 2026-01-03 04:30:52 - ebb5b7a7-6723-4886-b84d-52ba5be82e2a
2026-01-03 04:30:53.128143 | orchestrator | 2026-01-03 04:30:53 - clean up volumes
2026-01-03 04:30:53.229590 | orchestrator | 2026-01-03 04:30:53 - testbed-volume-2-node-base
2026-01-03 04:30:53.279726 | orchestrator | 2026-01-03 04:30:53 - testbed-volume-0-node-base
2026-01-03 04:30:53.327765 | orchestrator | 2026-01-03 04:30:53 - testbed-volume-5-node-base
2026-01-03 04:30:53.374998 | orchestrator | 2026-01-03 04:30:53 - testbed-volume-3-node-base
2026-01-03 04:30:53.426496 | orchestrator | 2026-01-03 04:30:53 - testbed-volume-4-node-base
2026-01-03 04:30:53.473117 | orchestrator | 2026-01-03 04:30:53 - testbed-volume-1-node-base
2026-01-03 04:30:53.522624 | orchestrator | 2026-01-03 04:30:53 - testbed-volume-manager-base
2026-01-03 04:30:53.578156 | orchestrator | 2026-01-03 04:30:53 - testbed-volume-6-node-3
2026-01-03 04:30:53.622945 | orchestrator | 2026-01-03 04:30:53 - testbed-volume-0-node-3
2026-01-03 04:30:53.667196 | orchestrator | 2026-01-03 04:30:53 - testbed-volume-7-node-4
2026-01-03 04:30:53.717688 | orchestrator | 2026-01-03 04:30:53 - testbed-volume-2-node-5
2026-01-03 04:30:53.758894 | orchestrator | 2026-01-03 04:30:53 - testbed-volume-4-node-4
2026-01-03 04:30:53.802152 | orchestrator | 2026-01-03 04:30:53 - testbed-volume-8-node-5
2026-01-03 04:30:53.853953 | orchestrator | 2026-01-03 04:30:53 - testbed-volume-1-node-4
2026-01-03 04:30:53.901094 | orchestrator | 2026-01-03 04:30:53 - testbed-volume-3-node-3
2026-01-03 04:30:53.944673 | orchestrator | 2026-01-03 04:30:53 - testbed-volume-5-node-5
2026-01-03 04:30:53.996734 | orchestrator | 2026-01-03 04:30:53 - disconnect routers
2026-01-03 04:30:54.159324 | orchestrator | 2026-01-03 04:30:54 - testbed
2026-01-03 04:30:55.764487 | orchestrator | 2026-01-03 04:30:55 - clean up subnets
2026-01-03 04:30:55.826121 | orchestrator | 2026-01-03 04:30:55 - subnet-testbed-management
2026-01-03 04:30:55.992580 | orchestrator | 2026-01-03 04:30:55 - clean up networks
2026-01-03 04:30:56.182896 | orchestrator | 2026-01-03 04:30:56 - net-testbed-management
2026-01-03 04:30:56.558882 | orchestrator | 2026-01-03 04:30:56 - clean up security groups
2026-01-03 04:30:56.602957 | orchestrator | 2026-01-03 04:30:56 - testbed-management
2026-01-03 04:30:56.729375 | orchestrator | 2026-01-03 04:30:56 - testbed-node
2026-01-03 04:30:56.868826 | orchestrator | 2026-01-03 04:30:56 - clean up floating ips
2026-01-03 04:30:56.903162 | orchestrator | 2026-01-03 04:30:56 - 81.163.192.133
2026-01-03 04:30:57.306753 | orchestrator | 2026-01-03 04:30:57 - clean up routers
2026-01-03 04:30:57.448455 | orchestrator | 2026-01-03 04:30:57 - testbed
2026-01-03 04:30:59.008630 | orchestrator | ok: Runtime: 0:00:30.203531
2026-01-03 04:30:59.011891 |
2026-01-03 04:30:59.012033 | PLAY RECAP
2026-01-03 04:30:59.012133 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-01-03 04:30:59.012180 |
2026-01-03 04:30:59.168328 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-03 04:30:59.169391 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-03 04:30:59.938752 |
2026-01-03 04:30:59.938962 | PLAY [Cleanup play]
2026-01-03 04:30:59.956452 |
2026-01-03 04:30:59.956610 | TASK [Set cloud fact (Zuul deployment)]
2026-01-03 04:31:00.017495 | orchestrator | ok
2026-01-03 04:31:00.029205 |
2026-01-03 04:31:00.029394 | TASK [Set cloud fact (local deployment)]
2026-01-03 04:31:00.065920 | orchestrator | skipping: Conditional result was False
2026-01-03 04:31:00.084034 |
2026-01-03 04:31:00.084222 | TASK [Clean the cloud environment]
2026-01-03 04:31:01.228296 | orchestrator | 2026-01-03 04:31:01 - clean up servers
2026-01-03 04:31:01.827290 | orchestrator | 2026-01-03 04:31:01 - clean up keypairs
2026-01-03 04:31:01.847581 | orchestrator | 2026-01-03 04:31:01 - wait for servers to be gone
2026-01-03 04:31:01.888499 | orchestrator | 2026-01-03 04:31:01 - clean up ports
2026-01-03 04:31:01.962445 | orchestrator | 2026-01-03 04:31:01 - clean up volumes
2026-01-03 04:31:02.028457 | orchestrator | 2026-01-03 04:31:02 - disconnect routers
2026-01-03 04:31:02.060617 | orchestrator | 2026-01-03 04:31:02 - clean up subnets
2026-01-03 04:31:02.080141 | orchestrator | 2026-01-03 04:31:02 - clean up networks
2026-01-03 04:31:02.262638 | orchestrator | 2026-01-03 04:31:02 - clean up security groups
2026-01-03 04:31:02.305675 | orchestrator | 2026-01-03 04:31:02 - clean up floating ips
2026-01-03 04:31:02.329104 | orchestrator | 2026-01-03 04:31:02 - clean up routers
2026-01-03 04:31:02.629103 | orchestrator | ok: Runtime: 0:00:01.492717
2026-01-03 04:31:02.632861 |
2026-01-03 04:31:02.633047 | PLAY RECAP
2026-01-03 04:31:02.633179 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-01-03 04:31:02.633245 |
2026-01-03 04:31:02.781431 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-03 04:31:02.782545 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-03 04:31:03.653318 |
2026-01-03 04:31:03.653490 | PLAY [Base post-fetch]
2026-01-03 04:31:03.677851 |
2026-01-03 04:31:03.678051 | TASK [fetch-output : Set log path for multiple nodes]
2026-01-03 04:31:03.745020 | orchestrator | skipping: Conditional result was False
2026-01-03 04:31:03.755724 |
2026-01-03 04:31:03.755910 | TASK [fetch-output : Set log path for single node]
2026-01-03 04:31:03.804735 | orchestrator | ok
2026-01-03 04:31:03.814411 |
2026-01-03 04:31:03.814567 | LOOP [fetch-output : Ensure local output dirs]
2026-01-03 04:31:04.321980 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/7cea1800cefa4460941b05e9a4d84b02/work/logs"
2026-01-03 04:31:04.601213 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/7cea1800cefa4460941b05e9a4d84b02/work/artifacts"
2026-01-03 04:31:04.918825 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/7cea1800cefa4460941b05e9a4d84b02/work/docs"
2026-01-03 04:31:04.945145 |
2026-01-03 04:31:04.945302 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-01-03 04:31:05.897882 | orchestrator | changed: .d..t...... ./
2026-01-03 04:31:05.898292 | orchestrator | changed: All items complete
2026-01-03 04:31:05.898351 |
2026-01-03 04:31:06.645078 | orchestrator | changed: .d..t...... ./
2026-01-03 04:31:07.339906 | orchestrator | changed: .d..t...... ./
2026-01-03 04:31:07.359969 |
2026-01-03 04:31:07.360114 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-01-03 04:31:07.397556 | orchestrator | skipping: Conditional result was False
2026-01-03 04:31:07.401735 | orchestrator | skipping: Conditional result was False
2026-01-03 04:31:07.420207 |
2026-01-03 04:31:07.420354 | PLAY RECAP
2026-01-03 04:31:07.420464 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-01-03 04:31:07.420508 |
2026-01-03 04:31:07.569104 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-03 04:31:07.570205 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-03 04:31:08.330694 |
2026-01-03 04:31:08.330902 | PLAY [Base post]
2026-01-03 04:31:08.345766 |
2026-01-03 04:31:08.345940 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-01-03 04:31:09.314197 | orchestrator | changed
2026-01-03 04:31:09.326516 |
2026-01-03 04:31:09.326661 | PLAY RECAP
2026-01-03 04:31:09.326748 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-01-03 04:31:09.326814 |
2026-01-03 04:31:09.461801 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-03 04:31:09.462930 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-01-03 04:31:10.402376 |
2026-01-03 04:31:10.402623 | PLAY [Base post-logs]
2026-01-03 04:31:10.425864 |
2026-01-03 04:31:10.426042 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-01-03 04:31:10.948039 | localhost | changed
2026-01-03 04:31:10.967419 |
2026-01-03 04:31:10.967831 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-01-03 04:31:11.008520 | localhost | ok
2026-01-03 04:31:11.014697 |
2026-01-03 04:31:11.014971 | TASK [Set zuul-log-path fact]
2026-01-03 04:31:11.044319 | localhost | ok
2026-01-03 04:31:11.063231 |
2026-01-03 04:31:11.063450 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-03 04:31:11.102871 | localhost | ok
2026-01-03 04:31:11.111370 |
2026-01-03 04:31:11.111537 | TASK [upload-logs : Create log directories]
2026-01-03 04:31:11.685592 | localhost | changed
2026-01-03 04:31:11.695715 |
2026-01-03 04:31:11.696116 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-01-03 04:31:12.316263 | localhost -> localhost | ok: Runtime: 0:00:00.006424
2026-01-03 04:31:12.322568 |
2026-01-03 04:31:12.322885 | TASK [upload-logs : Upload logs to log server]
2026-01-03 04:31:12.907645 | localhost | Output suppressed because no_log was given
2026-01-03 04:31:12.909772 |
2026-01-03 04:31:12.909890 | LOOP [upload-logs : Compress console log and json output]
2026-01-03 04:31:12.962361 | localhost | skipping: Conditional result was False
2026-01-03 04:31:12.968604 | localhost | skipping: Conditional result was False
2026-01-03 04:31:12.980142 |
2026-01-03 04:31:12.980339 | LOOP [upload-logs : Upload compressed console log and json output]
2026-01-03 04:31:13.028041 | localhost | skipping: Conditional result was False
2026-01-03 04:31:13.028481 |
2026-01-03 04:31:13.033186 | localhost | skipping: Conditional result was False
2026-01-03 04:31:13.044543 |
2026-01-03 04:31:13.044762 | LOOP [upload-logs : Upload console log and json output]