2025-07-27 00:00:07.111623 | Job console starting
2025-07-27 00:00:07.128207 | Updating git repos
2025-07-27 00:00:07.196007 | Cloning repos into workspace
2025-07-27 00:00:07.375329 | Restoring repo states
2025-07-27 00:00:07.406615 | Merging changes
2025-07-27 00:00:07.406646 | Checking out repos
2025-07-27 00:00:07.805037 | Preparing playbooks
2025-07-27 00:00:08.550569 | Running Ansible setup
2025-07-27 00:00:14.037174 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-07-27 00:00:15.848505 |
2025-07-27 00:00:15.848649 | PLAY [Base pre]
2025-07-27 00:00:15.880613 |
2025-07-27 00:00:15.880752 | TASK [Setup log path fact]
2025-07-27 00:00:15.939305 | orchestrator | ok
2025-07-27 00:00:16.005762 |
2025-07-27 00:00:16.013603 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-07-27 00:00:16.092921 | orchestrator | ok
2025-07-27 00:00:16.113446 |
2025-07-27 00:00:16.113558 | TASK [emit-job-header : Print job information]
2025-07-27 00:00:16.152926 | # Job Information
2025-07-27 00:00:16.153082 | Ansible Version: 2.16.14
2025-07-27 00:00:16.153116 | Job: testbed-deploy-in-a-nutshell-with-tempest-ubuntu-24.04
2025-07-27 00:00:16.153149 | Pipeline: periodic-midnight
2025-07-27 00:00:16.153171 | Executor: 521e9411259a
2025-07-27 00:00:16.153192 | Triggered by: https://github.com/osism/testbed
2025-07-27 00:00:16.153213 | Event ID: c857245ace664fbc817da90b99c3dedc
2025-07-27 00:00:16.159597 |
2025-07-27 00:00:16.159718 | LOOP [emit-job-header : Print node information]
2025-07-27 00:00:16.392653 | orchestrator | ok:
2025-07-27 00:00:16.393821 | orchestrator | # Node Information
2025-07-27 00:00:16.393872 | orchestrator | Inventory Hostname: orchestrator
2025-07-27 00:00:16.393899 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-07-27 00:00:16.393920 | orchestrator | Username: zuul-testbed02
2025-07-27 00:00:16.393942 | orchestrator | Distro: Debian 12.11
2025-07-27 00:00:16.393965 | orchestrator | Provider: static-testbed
2025-07-27 00:00:16.393987 | orchestrator | Region:
2025-07-27 00:00:16.394008 | orchestrator | Label: testbed-orchestrator
2025-07-27 00:00:16.394028 | orchestrator | Product Name: OpenStack Nova
2025-07-27 00:00:16.394048 | orchestrator | Interface IP: 81.163.193.140
2025-07-27 00:00:16.426252 |
2025-07-27 00:00:16.426366 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-07-27 00:00:17.199882 | orchestrator -> localhost | changed
2025-07-27 00:00:17.206504 |
2025-07-27 00:00:17.206605 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-07-27 00:00:19.923955 | orchestrator -> localhost | changed
2025-07-27 00:00:19.951300 |
2025-07-27 00:00:19.951403 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-07-27 00:00:20.358335 | orchestrator -> localhost | ok
2025-07-27 00:00:20.364191 |
2025-07-27 00:00:20.364286 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-07-27 00:00:20.382433 | orchestrator | ok
2025-07-27 00:00:20.403883 | orchestrator | included: /var/lib/zuul/builds/a37dd791e3bb41e7ac00ecb3823bd6ca/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-07-27 00:00:20.410289 |
2025-07-27 00:00:20.410365 | TASK [add-build-sshkey : Create Temp SSH key]
2025-07-27 00:00:23.592787 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-07-27 00:00:23.592948 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/a37dd791e3bb41e7ac00ecb3823bd6ca/work/a37dd791e3bb41e7ac00ecb3823bd6ca_id_rsa
2025-07-27 00:00:23.592979 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/a37dd791e3bb41e7ac00ecb3823bd6ca/work/a37dd791e3bb41e7ac00ecb3823bd6ca_id_rsa.pub
2025-07-27 00:00:23.593001 | orchestrator -> localhost | The key fingerprint is:
2025-07-27 00:00:23.593020 | orchestrator -> localhost | SHA256:j027ra+gRKYbOYSejQzXn0sw2guYWzxwK78mZYUrmqE zuul-build-sshkey
2025-07-27 00:00:23.593037 | orchestrator -> localhost | The key's randomart image is:
2025-07-27 00:00:23.593069 | orchestrator -> localhost | +---[RSA 3072]----+
2025-07-27 00:00:23.593088 | orchestrator -> localhost | | |
2025-07-27 00:00:23.593106 | orchestrator -> localhost | | |
2025-07-27 00:00:23.593123 | orchestrator -> localhost | | . |
2025-07-27 00:00:23.593139 | orchestrator -> localhost | | .o. |
2025-07-27 00:00:23.593154 | orchestrator -> localhost | |o +o= o S . |
2025-07-27 00:00:23.593177 | orchestrator -> localhost | |o@+O O . = . |
2025-07-27 00:00:23.593194 | orchestrator -> localhost | |**% B = o + |
2025-07-27 00:00:23.593210 | orchestrator -> localhost | |E=.o B o . o |
2025-07-27 00:00:23.593227 | orchestrator -> localhost | |.oo.o o ++o |
2025-07-27 00:00:23.593244 | orchestrator -> localhost | +----[SHA256]-----+
2025-07-27 00:00:23.593285 | orchestrator -> localhost | ok: Runtime: 0:00:02.570593
2025-07-27 00:00:23.599326 |
2025-07-27 00:00:23.599405 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-07-27 00:00:23.656668 | orchestrator | ok
2025-07-27 00:00:23.674154 | orchestrator | included: /var/lib/zuul/builds/a37dd791e3bb41e7ac00ecb3823bd6ca/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-07-27 00:00:23.696233 |
2025-07-27 00:00:23.696334 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-07-27 00:00:23.708980 | orchestrator | skipping: Conditional result was False
2025-07-27 00:00:23.725251 |
2025-07-27 00:00:23.725343 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-07-27 00:00:24.310183 | orchestrator | changed
2025-07-27 00:00:24.315926 |
2025-07-27 00:00:24.316009 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-07-27 00:00:24.616801 | orchestrator | ok
2025-07-27 00:00:24.621888 |
2025-07-27 00:00:24.621965 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-07-27 00:00:25.331315 | orchestrator | ok
2025-07-27 00:00:25.337053 |
2025-07-27 00:00:25.337156 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-07-27 00:00:25.808194 | orchestrator | ok
2025-07-27 00:00:25.817144 |
2025-07-27 00:00:25.817250 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-07-27 00:00:25.864515 | orchestrator | skipping: Conditional result was False
2025-07-27 00:00:25.870437 |
2025-07-27 00:00:25.870541 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-07-27 00:00:26.452587 | orchestrator -> localhost | changed
2025-07-27 00:00:26.479128 |
2025-07-27 00:00:26.479230 | TASK [add-build-sshkey : Add back temp key]
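The "Create Temp SSH key" task above generates a passphrase-less per-build RSA keypair named after the build UUID. A minimal sketch of an equivalent invocation (the variable names and temp directory are illustrative, not the role's exact command):

```shell
# Illustrative only: generate a 3072-bit RSA keypair with no passphrase,
# named after a build UUID, matching the ssh-keygen output shown in the log.
BUILD_UUID=a37dd791e3bb41e7ac00ecb3823bd6ca   # example UUID from this log
WORKDIR=$(mktemp -d)                          # stand-in for the Zuul work dir
ssh-keygen -t rsa -b 3072 -N '' \
  -C zuul-build-sshkey \
  -f "${WORKDIR}/${BUILD_UUID}_id_rsa"
ls "${WORKDIR}"   # private key plus matching .pub file
```

This produces the `<uuid>_id_rsa` / `<uuid>_id_rsa.pub` pair that the later "Install build private/public key" tasks distribute to the nodes.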
2025-07-27 00:00:27.012772 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/a37dd791e3bb41e7ac00ecb3823bd6ca/work/a37dd791e3bb41e7ac00ecb3823bd6ca_id_rsa (zuul-build-sshkey)
2025-07-27 00:00:27.012952 | orchestrator -> localhost | ok: Runtime: 0:00:00.029553
2025-07-27 00:00:27.018683 |
2025-07-27 00:00:27.018760 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-07-27 00:00:27.573441 | orchestrator | ok
2025-07-27 00:00:27.578403 |
2025-07-27 00:00:27.578481 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-07-27 00:00:27.611309 | orchestrator | skipping: Conditional result was False
2025-07-27 00:00:27.671802 |
2025-07-27 00:00:27.671897 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-07-27 00:00:28.130014 | orchestrator | ok
2025-07-27 00:00:28.149286 |
2025-07-27 00:00:28.149388 | TASK [validate-host : Define zuul_info_dir fact]
2025-07-27 00:00:28.212888 | orchestrator | ok
2025-07-27 00:00:28.236238 |
2025-07-27 00:00:28.236345 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-07-27 00:00:28.999354 | orchestrator -> localhost | ok
2025-07-27 00:00:29.005467 |
2025-07-27 00:00:29.005550 | TASK [validate-host : Collect information about the host]
2025-07-27 00:00:30.405769 | orchestrator | ok
2025-07-27 00:00:30.433360 |
2025-07-27 00:00:30.433473 | TASK [validate-host : Sanitize hostname]
2025-07-27 00:00:30.533410 | orchestrator | ok
2025-07-27 00:00:30.537868 |
2025-07-27 00:00:30.537946 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-07-27 00:00:31.703281 | orchestrator -> localhost | changed
2025-07-27 00:00:31.708710 |
2025-07-27 00:00:31.708843 | TASK [validate-host : Collect information about zuul worker]
2025-07-27 00:00:32.382017 | orchestrator | ok
2025-07-27 00:00:32.386372 |
2025-07-27 00:00:32.386458 | TASK [validate-host : Write out all zuul information for each host]
2025-07-27 00:00:33.102312 | orchestrator -> localhost | changed
2025-07-27 00:00:33.110745 |
2025-07-27 00:00:33.110827 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-07-27 00:00:33.387496 | orchestrator | ok
2025-07-27 00:00:33.403136 |
2025-07-27 00:00:33.403227 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-07-27 00:01:15.724724 | orchestrator | changed:
2025-07-27 00:01:15.724958 | orchestrator | .d..t...... src/
2025-07-27 00:01:15.724995 | orchestrator | .d..t...... src/github.com/
2025-07-27 00:01:15.725020 | orchestrator | .d..t...... src/github.com/osism/
2025-07-27 00:01:15.725043 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-07-27 00:01:15.725064 | orchestrator | RedHat.yml
2025-07-27 00:01:15.753937 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-07-27 00:01:15.753955 | orchestrator | RedHat.yml
2025-07-27 00:01:15.754009 | orchestrator | = 1.53.0"...
2025-07-27 00:01:29.098847 | orchestrator | 00:01:29.098 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-07-27 00:01:29.129064 | orchestrator | 00:01:29.128 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-07-27 00:01:29.661705 | orchestrator | 00:01:29.661 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-07-27 00:01:30.595111 | orchestrator | 00:01:30.594 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-07-27 00:01:30.674412 | orchestrator | 00:01:30.674 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-07-27 00:01:31.159156 | orchestrator | 00:01:31.158 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-07-27 00:01:31.236362 | orchestrator | 00:01:31.236 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-07-27 00:01:31.720020 | orchestrator | 00:01:31.718 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-07-27 00:01:31.720104 | orchestrator | 00:01:31.718 STDOUT terraform: Providers are signed by their developers.
2025-07-27 00:01:31.720117 | orchestrator | 00:01:31.718 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-07-27 00:01:31.720128 | orchestrator | 00:01:31.718 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-07-27 00:01:31.720138 | orchestrator | 00:01:31.718 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-07-27 00:01:31.720153 | orchestrator | 00:01:31.718 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-07-27 00:01:31.720167 | orchestrator | 00:01:31.719 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-07-27 00:01:31.720177 | orchestrator | 00:01:31.719 STDOUT terraform: you run "tofu init" in the future.
2025-07-27 00:01:31.720187 | orchestrator | 00:01:31.719 STDOUT terraform: OpenTofu has been successfully initialized!
2025-07-27 00:01:31.720197 | orchestrator | 00:01:31.719 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-07-27 00:01:31.720251 | orchestrator | 00:01:31.719 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-07-27 00:01:31.720262 | orchestrator | 00:01:31.719 STDOUT terraform: should now work.
2025-07-27 00:01:31.720272 | orchestrator | 00:01:31.719 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-07-27 00:01:31.720282 | orchestrator | 00:01:31.719 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-07-27 00:01:31.720293 | orchestrator | 00:01:31.719 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-07-27 00:01:31.847490 | orchestrator | 00:01:31.847 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-07-27 00:01:31.847705 | orchestrator | 00:01:31.847 WARN The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-07-27 00:01:32.051540 | orchestrator | 00:01:32.051 STDOUT terraform: Created and switched to workspace "ci"!
2025-07-27 00:01:32.051624 | orchestrator | 00:01:32.051 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-07-27 00:01:32.051636 | orchestrator | 00:01:32.051 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-07-27 00:01:32.051644 | orchestrator | 00:01:32.051 STDOUT terraform: for this configuration.
2025-07-27 00:01:32.208669 | orchestrator | 00:01:32.208 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-07-27 00:01:32.208772 | orchestrator | 00:01:32.208 WARN The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-07-27 00:01:32.320334 | orchestrator | 00:01:32.320 STDOUT terraform: ci.auto.tfvars
2025-07-27 00:01:32.323491 | orchestrator | 00:01:32.323 STDOUT terraform: default_custom.tf
2025-07-27 00:01:32.441965 | orchestrator | 00:01:32.441 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
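The repeated Terragrunt warnings above state their own remedy: the `TERRAGRUNT_TFPATH` variable has been superseded by the `TG_`-prefixed form. A minimal sketch of that migration in the job environment, using the exact replacement the warning suggests:

```shell
# Migrate off the deprecated variable, as the Terragrunt warnings recommend.
unset TERRAGRUNT_TFPATH
export TG_TF_PATH=/home/zuul-testbed02/terraform
echo "$TG_TF_PATH"
```

Making this change in the job definition would silence the `TERRAGRUNT_TFPATH` warning on every Terragrunt invocation; the `workspace` and `fmt` command deprecations would separately need the `terragrunt run -- ...` form.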
2025-07-27 00:01:33.528285 | orchestrator | 00:01:33.527 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 2025-07-27 00:01:34.064963 | orchestrator | 00:01:34.064 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-07-27 00:01:34.390896 | orchestrator | 00:01:34.390 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-07-27 00:01:34.390979 | orchestrator | 00:01:34.390 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-07-27 00:01:34.390988 | orchestrator | 00:01:34.390 STDOUT terraform:  + create 2025-07-27 00:01:34.390995 | orchestrator | 00:01:34.390 STDOUT terraform:  <= read (data resources) 2025-07-27 00:01:34.391006 | orchestrator | 00:01:34.390 STDOUT terraform: OpenTofu will perform the following actions: 2025-07-27 00:01:34.391198 | orchestrator | 00:01:34.391 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-07-27 00:01:34.391233 | orchestrator | 00:01:34.391 STDOUT terraform:  # (config refers to values not yet known) 2025-07-27 00:01:34.391274 | orchestrator | 00:01:34.391 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-07-27 00:01:34.391307 | orchestrator | 00:01:34.391 STDOUT terraform:  + checksum = (known after apply) 2025-07-27 00:01:34.391340 | orchestrator | 00:01:34.391 STDOUT terraform:  + created_at = (known after apply) 2025-07-27 00:01:34.391378 | orchestrator | 00:01:34.391 STDOUT terraform:  + file = (known after apply) 2025-07-27 00:01:34.391434 | orchestrator | 00:01:34.391 STDOUT terraform:  + id = (known after apply) 2025-07-27 00:01:34.391510 | orchestrator | 00:01:34.391 STDOUT terraform:  + metadata = (known after apply) 2025-07-27 00:01:34.391532 | orchestrator | 00:01:34.391 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-07-27 00:01:34.391589 | orchestrator | 00:01:34.391 
STDOUT terraform:  + min_ram_mb = (known after apply) 2025-07-27 00:01:34.391596 | orchestrator | 00:01:34.391 STDOUT terraform:  + most_recent = true 2025-07-27 00:01:34.391623 | orchestrator | 00:01:34.391 STDOUT terraform:  + name = (known after apply) 2025-07-27 00:01:34.391677 | orchestrator | 00:01:34.391 STDOUT terraform:  + protected = (known after apply) 2025-07-27 00:01:34.391685 | orchestrator | 00:01:34.391 STDOUT terraform:  + region = (known after apply) 2025-07-27 00:01:34.391723 | orchestrator | 00:01:34.391 STDOUT terraform:  + schema = (known after apply) 2025-07-27 00:01:34.391767 | orchestrator | 00:01:34.391 STDOUT terraform:  + size_bytes = (known after apply) 2025-07-27 00:01:34.391779 | orchestrator | 00:01:34.391 STDOUT terraform:  + tags = (known after apply) 2025-07-27 00:01:34.391813 | orchestrator | 00:01:34.391 STDOUT terraform:  + updated_at = (known after apply) 2025-07-27 00:01:34.391838 | orchestrator | 00:01:34.391 STDOUT terraform:  } 2025-07-27 00:01:34.396721 | orchestrator | 00:01:34.391 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-07-27 00:01:34.396775 | orchestrator | 00:01:34.392 STDOUT terraform:  # (config refers to values not yet known) 2025-07-27 00:01:34.396780 | orchestrator | 00:01:34.392 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-07-27 00:01:34.396785 | orchestrator | 00:01:34.392 STDOUT terraform:  + checksum = (known after apply) 2025-07-27 00:01:34.396790 | orchestrator | 00:01:34.392 STDOUT terraform:  + created_at = (known after apply) 2025-07-27 00:01:34.396804 | orchestrator | 00:01:34.392 STDOUT terraform:  + file = (known after apply) 2025-07-27 00:01:34.396809 | orchestrator | 00:01:34.392 STDOUT terraform:  + id = (known after apply) 2025-07-27 00:01:34.396814 | orchestrator | 00:01:34.392 STDOUT terraform:  + metadata = (known after apply) 2025-07-27 00:01:34.396819 | orchestrator | 00:01:34.392 STDOUT terraform:  + 
min_disk_gb = (known after apply) 2025-07-27 00:01:34.396824 | orchestrator | 00:01:34.392 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-07-27 00:01:34.396829 | orchestrator | 00:01:34.392 STDOUT terraform:  + most_recent = true 2025-07-27 00:01:34.396834 | orchestrator | 00:01:34.392 STDOUT terraform:  + name = (known after apply) 2025-07-27 00:01:34.396838 | orchestrator | 00:01:34.392 STDOUT terraform:  + protected = (known after apply) 2025-07-27 00:01:34.396843 | orchestrator | 00:01:34.392 STDOUT terraform:  + region = (known after apply) 2025-07-27 00:01:34.396847 | orchestrator | 00:01:34.392 STDOUT terraform:  + schema = (known after apply) 2025-07-27 00:01:34.396852 | orchestrator | 00:01:34.392 STDOUT terraform:  + size_bytes = (known after apply) 2025-07-27 00:01:34.396857 | orchestrator | 00:01:34.392 STDOUT terraform:  + tags = (known after apply) 2025-07-27 00:01:34.396862 | orchestrator | 00:01:34.392 STDOUT terraform:  + updated_at = (known after apply) 2025-07-27 00:01:34.396866 | orchestrator | 00:01:34.392 STDOUT terraform:  } 2025-07-27 00:01:34.396871 | orchestrator | 00:01:34.392 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-07-27 00:01:34.396886 | orchestrator | 00:01:34.392 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-07-27 00:01:34.396891 | orchestrator | 00:01:34.392 STDOUT terraform:  + content = (known after apply) 2025-07-27 00:01:34.396895 | orchestrator | 00:01:34.392 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-07-27 00:01:34.396900 | orchestrator | 00:01:34.392 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-07-27 00:01:34.396904 | orchestrator | 00:01:34.392 STDOUT terraform:  + content_md5 = (known after apply) 2025-07-27 00:01:34.396909 | orchestrator | 00:01:34.392 STDOUT terraform:  + content_sha1 = (known after apply) 2025-07-27 00:01:34.396914 | orchestrator | 00:01:34.392 STDOUT terraform:  + content_sha256 = (known after 
apply) 2025-07-27 00:01:34.396918 | orchestrator | 00:01:34.392 STDOUT terraform:  + content_sha512 = (known after apply) 2025-07-27 00:01:34.396923 | orchestrator | 00:01:34.392 STDOUT terraform:  + directory_permission = "0777" 2025-07-27 00:01:34.396928 | orchestrator | 00:01:34.392 STDOUT terraform:  + file_permission = "0644" 2025-07-27 00:01:34.396932 | orchestrator | 00:01:34.392 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-07-27 00:01:34.396937 | orchestrator | 00:01:34.392 STDOUT terraform:  + id = (known after apply) 2025-07-27 00:01:34.396941 | orchestrator | 00:01:34.393 STDOUT terraform:  } 2025-07-27 00:01:34.396946 | orchestrator | 00:01:34.393 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-07-27 00:01:34.396950 | orchestrator | 00:01:34.393 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-07-27 00:01:34.396955 | orchestrator | 00:01:34.393 STDOUT terraform:  + content = (known after apply) 2025-07-27 00:01:34.396959 | orchestrator | 00:01:34.393 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-07-27 00:01:34.396964 | orchestrator | 00:01:34.393 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-07-27 00:01:34.396977 | orchestrator | 00:01:34.393 STDOUT terraform:  + content_md5 = (known after apply) 2025-07-27 00:01:34.396982 | orchestrator | 00:01:34.393 STDOUT terraform:  + content_sha1 = (known after apply) 2025-07-27 00:01:34.396987 | orchestrator | 00:01:34.393 STDOUT terraform:  + content_sha256 = (known after apply) 2025-07-27 00:01:34.396994 | orchestrator | 00:01:34.393 STDOUT terraform:  + content_sha512 = (known after apply) 2025-07-27 00:01:34.396999 | orchestrator | 00:01:34.393 STDOUT terraform:  + directory_permission = "0777" 2025-07-27 00:01:34.397003 | orchestrator | 00:01:34.393 STDOUT terraform:  + file_permission = "0644" 2025-07-27 00:01:34.397008 | orchestrator | 00:01:34.393 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-07-27 
00:01:34.397012 | orchestrator | 00:01:34.393 STDOUT terraform:  + id = (known after apply) 2025-07-27 00:01:34.397017 | orchestrator | 00:01:34.393 STDOUT terraform:  } 2025-07-27 00:01:34.397022 | orchestrator | 00:01:34.393 STDOUT terraform:  # local_file.inventory will be created 2025-07-27 00:01:34.397026 | orchestrator | 00:01:34.393 STDOUT terraform:  + resource "local_file" "inventory" { 2025-07-27 00:01:34.397031 | orchestrator | 00:01:34.393 STDOUT terraform:  + content = (known after apply) 2025-07-27 00:01:34.397039 | orchestrator | 00:01:34.393 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-07-27 00:01:34.397043 | orchestrator | 00:01:34.393 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-07-27 00:01:34.397048 | orchestrator | 00:01:34.393 STDOUT terraform:  + content_md5 = (known after apply) 2025-07-27 00:01:34.397052 | orchestrator | 00:01:34.393 STDOUT terraform:  + content_sha1 = (known after apply) 2025-07-27 00:01:34.397057 | orchestrator | 00:01:34.393 STDOUT terraform:  + content_sha256 = (known after apply) 2025-07-27 00:01:34.397061 | orchestrator | 00:01:34.393 STDOUT terraform:  + content_sha512 = (known after apply) 2025-07-27 00:01:34.397066 | orchestrator | 00:01:34.393 STDOUT terraform:  + directory_permission = "0777" 2025-07-27 00:01:34.397071 | orchestrator | 00:01:34.393 STDOUT terraform:  + file_permission = "0644" 2025-07-27 00:01:34.397075 | orchestrator | 00:01:34.393 STDOUT terraform:  + filename = "inventory.ci" 2025-07-27 00:01:34.397080 | orchestrator | 00:01:34.393 STDOUT terraform:  + id = (known after apply) 2025-07-27 00:01:34.397084 | orchestrator | 00:01:34.394 STDOUT terraform:  } 2025-07-27 00:01:34.397089 | orchestrator | 00:01:34.394 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-07-27 00:01:34.397093 | orchestrator | 00:01:34.394 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-07-27 00:01:34.397098 | orchestrator | 00:01:34.394 
STDOUT terraform:  + content = (sensitive value) 2025-07-27 00:01:34.397103 | orchestrator | 00:01:34.394 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-07-27 00:01:34.397107 | orchestrator | 00:01:34.394 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-07-27 00:01:34.397112 | orchestrator | 00:01:34.394 STDOUT terraform:  + content_md5 = (known after apply) 2025-07-27 00:01:34.397116 | orchestrator | 00:01:34.394 STDOUT terraform:  + content_sha1 = (known after apply) 2025-07-27 00:01:34.397121 | orchestrator | 00:01:34.394 STDOUT terraform:  + content_sha256 = (known after apply) 2025-07-27 00:01:34.397125 | orchestrator | 00:01:34.394 STDOUT terraform:  + content_sha512 = (known after apply) 2025-07-27 00:01:34.397130 | orchestrator | 00:01:34.394 STDOUT terraform:  + directory_permission = "0700" 2025-07-27 00:01:34.397134 | orchestrator | 00:01:34.394 STDOUT terraform:  + file_permission = "0600" 2025-07-27 00:01:34.397139 | orchestrator | 00:01:34.394 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-07-27 00:01:34.397146 | orchestrator | 00:01:34.394 STDOUT terraform:  + id = (known after apply) 2025-07-27 00:01:34.397151 | orchestrator | 00:01:34.394 STDOUT terraform:  } 2025-07-27 00:01:34.397158 | orchestrator | 00:01:34.394 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-07-27 00:01:34.397163 | orchestrator | 00:01:34.394 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-07-27 00:01:34.397168 | orchestrator | 00:01:34.394 STDOUT terraform:  + id = (known after apply) 2025-07-27 00:01:34.397173 | orchestrator | 00:01:34.394 STDOUT terraform:  } 2025-07-27 00:01:34.397178 | orchestrator | 00:01:34.394 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-07-27 00:01:34.397186 | orchestrator | 00:01:34.394 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-07-27 00:01:34.397190 | 
orchestrator | 00:01:34.394 STDOUT terraform:  + attachment = (known after apply) 2025-07-27 00:01:34.397195 | orchestrator | 00:01:34.394 STDOUT terraform:  + availability_zone = "nova" 2025-07-27 00:01:34.397200 | orchestrator | 00:01:34.394 STDOUT terraform:  + id = (known after apply) 2025-07-27 00:01:34.397223 | orchestrator | 00:01:34.394 STDOUT terraform:  + image_id = (known after apply) 2025-07-27 00:01:34.397230 | orchestrator | 00:01:34.394 STDOUT terraform:  + metadata = (known after apply) 2025-07-27 00:01:34.397235 | orchestrator | 00:01:34.394 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-07-27 00:01:34.397239 | orchestrator | 00:01:34.394 STDOUT terraform:  + region = (known after apply) 2025-07-27 00:01:34.397244 | orchestrator | 00:01:34.395 STDOUT terraform:  + size = 80 2025-07-27 00:01:34.397248 | orchestrator | 00:01:34.395 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-27 00:01:34.397253 | orchestrator | 00:01:34.395 STDOUT terraform:  + volume_type = "ssd" 2025-07-27 00:01:34.397257 | orchestrator | 00:01:34.395 STDOUT terraform:  } 2025-07-27 00:01:34.397262 | orchestrator | 00:01:34.395 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-07-27 00:01:34.397267 | orchestrator | 00:01:34.395 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-07-27 00:01:34.397271 | orchestrator | 00:01:34.395 STDOUT terraform:  + attachment = (known after apply) 2025-07-27 00:01:34.397276 | orchestrator | 00:01:34.395 STDOUT terraform:  + availability_zone = "nova" 2025-07-27 00:01:34.397280 | orchestrator | 00:01:34.395 STDOUT terraform:  + id = (known after apply) 2025-07-27 00:01:34.397285 | orchestrator | 00:01:34.395 STDOUT terraform:  + image_id = (known after apply) 2025-07-27 00:01:34.397289 | orchestrator | 00:01:34.395 STDOUT terraform:  + metadata = (known after apply) 2025-07-27 00:01:34.397293 | orchestrator | 00:01:34.395 STDOUT 
terraform:  + name = "testbed-volume-0-node-base" 2025-07-27 00:01:34.397298 | orchestrator | 00:01:34.395 STDOUT terraform:  + region = (known after apply) 2025-07-27 00:01:34.397302 | orchestrator | 00:01:34.395 STDOUT terraform:  + size = 80 2025-07-27 00:01:34.397307 | orchestrator | 00:01:34.395 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-27 00:01:34.397311 | orchestrator | 00:01:34.395 STDOUT terraform:  + volume_type = "ssd" 2025-07-27 00:01:34.397316 | orchestrator | 00:01:34.395 STDOUT terraform:  } 2025-07-27 00:01:34.397324 | orchestrator | 00:01:34.396 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-07-27 00:01:34.397329 | orchestrator | 00:01:34.396 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-07-27 00:01:34.397333 | orchestrator | 00:01:34.396 STDOUT terraform:  + attachment = (known after apply) 2025-07-27 00:01:34.397342 | orchestrator | 00:01:34.396 STDOUT terraform:  + availability_zone = "nova" 2025-07-27 00:01:34.397346 | orchestrator | 00:01:34.396 STDOUT terraform:  + id = (known after apply) 2025-07-27 00:01:34.397351 | orchestrator | 00:01:34.396 STDOUT terraform:  + image_id = (known after apply) 2025-07-27 00:01:34.397355 | orchestrator | 00:01:34.396 STDOUT terraform:  + metadata = (known after apply) 2025-07-27 00:01:34.397366 | orchestrator | 00:01:34.396 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-07-27 00:01:34.397371 | orchestrator | 00:01:34.396 STDOUT terraform:  + region = (known after apply) 2025-07-27 00:01:34.397376 | orchestrator | 00:01:34.396 STDOUT terraform:  + size = 80 2025-07-27 00:01:34.397380 | orchestrator | 00:01:34.396 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-27 00:01:34.397385 | orchestrator | 00:01:34.396 STDOUT terraform:  + volume_type = "ssd" 2025-07-27 00:01:34.397389 | orchestrator | 00:01:34.396 STDOUT terraform:  } 2025-07-27 00:01:34.397393 | orchestrator | 
  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + image_id             = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-2-node-base"
      + region               = (known after apply)
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + image_id             = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-3-node-base"
      + region               = (known after apply)
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + image_id             = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-4-node-base"
      + region               = (known after apply)
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + image_id             = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-5-node-base"
      + region               = (known after apply)
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[0] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-0-node-3"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[1] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-1-node-4"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[2] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-2-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[3] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-3-node-3"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[4] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-4-node-4"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[5] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-5-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[6] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-6-node-3"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[7] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-7-node-4"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[8] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-8-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_compute_instance_v2.manager_server will be created
  + resource "openstack_compute_instance_v2" "manager_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-4V-16"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-manager"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = (sensitive value)

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[0] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-0"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-1"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
orchestrator | 00:01:34.411 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-27 00:01:34.411893 | orchestrator | 00:01:34.411 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-27 00:01:34.411927 | orchestrator | 00:01:34.411 STDOUT terraform:  + all_tags = (known after apply) 2025-07-27 00:01:34.411949 | orchestrator | 00:01:34.411 STDOUT terraform:  + availability_zone = "nova" 2025-07-27 00:01:34.411969 | orchestrator | 00:01:34.411 STDOUT terraform:  + config_drive = true 2025-07-27 00:01:34.412002 | orchestrator | 00:01:34.411 STDOUT terraform:  + created = (known after apply) 2025-07-27 00:01:34.412036 | orchestrator | 00:01:34.411 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-27 00:01:34.412065 | orchestrator | 00:01:34.412 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-27 00:01:34.412087 | orchestrator | 00:01:34.412 STDOUT terraform:  + force_delete = false 2025-07-27 00:01:34.412120 | orchestrator | 00:01:34.412 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-27 00:01:34.412155 | orchestrator | 00:01:34.412 STDOUT terraform:  + id = (known after apply) 2025-07-27 00:01:34.412189 | orchestrator | 00:01:34.412 STDOUT terraform:  + image_id = (known after apply) 2025-07-27 00:01:34.412264 | orchestrator | 00:01:34.412 STDOUT terraform:  + image_name = (known after apply) 2025-07-27 00:01:34.412289 | orchestrator | 00:01:34.412 STDOUT terraform:  + key_pair = "testbed" 2025-07-27 00:01:34.412321 | orchestrator | 00:01:34.412 STDOUT terraform:  + name = "testbed-node-4" 2025-07-27 00:01:34.412346 | orchestrator | 00:01:34.412 STDOUT terraform:  + power_state = "active" 2025-07-27 00:01:34.412381 | orchestrator | 00:01:34.412 STDOUT terraform:  + region = (known after apply) 2025-07-27 00:01:34.412415 | orchestrator | 00:01:34.412 STDOUT terraform:  + security_groups = (known after apply) 2025-07-27 00:01:34.413172 | orchestrator | 00:01:34.412 STDOUT terraform:  + stop_before_destroy = 
false 2025-07-27 00:01:34.413195 | orchestrator | 00:01:34.413 STDOUT terraform:  + updated = (known after apply) 2025-07-27 00:01:34.413269 | orchestrator | 00:01:34.413 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-27 00:01:34.413285 | orchestrator | 00:01:34.413 STDOUT terraform:  + block_device { 2025-07-27 00:01:34.413315 | orchestrator | 00:01:34.413 STDOUT terraform:  + boot_index = 0 2025-07-27 00:01:34.413345 | orchestrator | 00:01:34.413 STDOUT terraform:  + delete_on_termination = false 2025-07-27 00:01:34.413378 | orchestrator | 00:01:34.413 STDOUT terraform:  + destination_type = "volume" 2025-07-27 00:01:34.413408 | orchestrator | 00:01:34.413 STDOUT terraform:  + multiattach = false 2025-07-27 00:01:34.413440 | orchestrator | 00:01:34.413 STDOUT terraform:  + source_type = "volume" 2025-07-27 00:01:34.413481 | orchestrator | 00:01:34.413 STDOUT terraform:  + uuid = (known after apply) 2025-07-27 00:01:34.413497 | orchestrator | 00:01:34.413 STDOUT terraform:  } 2025-07-27 00:01:34.413512 | orchestrator | 00:01:34.413 STDOUT terraform:  + network { 2025-07-27 00:01:34.413533 | orchestrator | 00:01:34.413 STDOUT terraform:  + access_network = false 2025-07-27 00:01:34.413565 | orchestrator | 00:01:34.413 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-27 00:01:34.413596 | orchestrator | 00:01:34.413 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-27 00:01:34.413630 | orchestrator | 00:01:34.413 STDOUT terraform:  + mac = (known after apply) 2025-07-27 00:01:34.413662 | orchestrator | 00:01:34.413 STDOUT terraform:  + name = (known after apply) 2025-07-27 00:01:34.413707 | orchestrator | 00:01:34.413 STDOUT terraform:  + port = (known after apply) 2025-07-27 00:01:34.413729 | orchestrator | 00:01:34.413 STDOUT terraform:  + uuid = (known after apply) 2025-07-27 00:01:34.413743 | orchestrator | 00:01:34.413 STDOUT terraform:  } 2025-07-27 00:01:34.413749 | orchestrator | 00:01:34.413 
STDOUT terraform:  } 2025-07-27 00:01:34.413798 | orchestrator | 00:01:34.413 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-07-27 00:01:34.413842 | orchestrator | 00:01:34.413 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-27 00:01:34.413878 | orchestrator | 00:01:34.413 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-27 00:01:34.413915 | orchestrator | 00:01:34.413 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-27 00:01:34.413952 | orchestrator | 00:01:34.413 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-27 00:01:34.413989 | orchestrator | 00:01:34.413 STDOUT terraform:  + all_tags = (known after apply) 2025-07-27 00:01:34.414026 | orchestrator | 00:01:34.413 STDOUT terraform:  + availability_zone = "nova" 2025-07-27 00:01:34.414052 | orchestrator | 00:01:34.414 STDOUT terraform:  + config_drive = true 2025-07-27 00:01:34.414089 | orchestrator | 00:01:34.414 STDOUT terraform:  + created = (known after apply) 2025-07-27 00:01:34.414125 | orchestrator | 00:01:34.414 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-27 00:01:34.414161 | orchestrator | 00:01:34.414 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-27 00:01:34.414181 | orchestrator | 00:01:34.414 STDOUT terraform:  + force_delete = false 2025-07-27 00:01:34.414247 | orchestrator | 00:01:34.414 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-27 00:01:34.414267 | orchestrator | 00:01:34.414 STDOUT terraform:  + id = (known after apply) 2025-07-27 00:01:34.414305 | orchestrator | 00:01:34.414 STDOUT terraform:  + image_id = (known after apply) 2025-07-27 00:01:34.414344 | orchestrator | 00:01:34.414 STDOUT terraform:  + image_name = (known after apply) 2025-07-27 00:01:34.414368 | orchestrator | 00:01:34.414 STDOUT terraform:  + key_pair = "testbed" 2025-07-27 00:01:34.414400 | orchestrator | 00:01:34.414 STDOUT terraform:  + name = 
"testbed-node-5" 2025-07-27 00:01:34.414426 | orchestrator | 00:01:34.414 STDOUT terraform:  + power_state = "active" 2025-07-27 00:01:34.414463 | orchestrator | 00:01:34.414 STDOUT terraform:  + region = (known after apply) 2025-07-27 00:01:34.414500 | orchestrator | 00:01:34.414 STDOUT terraform:  + security_groups = (known after apply) 2025-07-27 00:01:34.414524 | orchestrator | 00:01:34.414 STDOUT terraform:  + stop_before_destroy = false 2025-07-27 00:01:34.414561 | orchestrator | 00:01:34.414 STDOUT terraform:  + updated = (known after apply) 2025-07-27 00:01:34.414612 | orchestrator | 00:01:34.414 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-27 00:01:34.414627 | orchestrator | 00:01:34.414 STDOUT terraform:  + block_device { 2025-07-27 00:01:34.414652 | orchestrator | 00:01:34.414 STDOUT terraform:  + boot_index = 0 2025-07-27 00:01:34.414681 | orchestrator | 00:01:34.414 STDOUT terraform:  + delete_on_termination = false 2025-07-27 00:01:34.414711 | orchestrator | 00:01:34.414 STDOUT terraform:  + destination_type = "volume" 2025-07-27 00:01:34.414740 | orchestrator | 00:01:34.414 STDOUT terraform:  + multiattach = false 2025-07-27 00:01:34.414771 | orchestrator | 00:01:34.414 STDOUT terraform:  + source_type = "volume" 2025-07-27 00:01:34.414810 | orchestrator | 00:01:34.414 STDOUT terraform:  + uuid = (known after apply) 2025-07-27 00:01:34.414824 | orchestrator | 00:01:34.414 STDOUT terraform:  } 2025-07-27 00:01:34.414840 | orchestrator | 00:01:34.414 STDOUT terraform:  + network { 2025-07-27 00:01:34.414862 | orchestrator | 00:01:34.414 STDOUT terraform:  + access_network = false 2025-07-27 00:01:34.414894 | orchestrator | 00:01:34.414 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-27 00:01:34.414926 | orchestrator | 00:01:34.414 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-27 00:01:34.414960 | orchestrator | 00:01:34.414 STDOUT terraform:  + mac = (known after apply) 2025-07-27 
00:01:34.414992 | orchestrator | 00:01:34.414 STDOUT terraform:  + name = (known after apply) 2025-07-27 00:01:34.415024 | orchestrator | 00:01:34.414 STDOUT terraform:  + port = (known after apply) 2025-07-27 00:01:34.415057 | orchestrator | 00:01:34.415 STDOUT terraform:  + uuid = (known after apply) 2025-07-27 00:01:34.415071 | orchestrator | 00:01:34.415 STDOUT terraform:  } 2025-07-27 00:01:34.415083 | orchestrator | 00:01:34.415 STDOUT terraform:  } 2025-07-27 00:01:34.415120 | orchestrator | 00:01:34.415 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-07-27 00:01:34.415155 | orchestrator | 00:01:34.415 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-07-27 00:01:34.415185 | orchestrator | 00:01:34.415 STDOUT terraform:  + fingerprint = (known after apply) 2025-07-27 00:01:34.415227 | orchestrator | 00:01:34.415 STDOUT terraform:  + id = (known after apply) 2025-07-27 00:01:34.415248 | orchestrator | 00:01:34.415 STDOUT terraform:  + name = "testbed" 2025-07-27 00:01:34.415273 | orchestrator | 00:01:34.415 STDOUT terraform:  + private_key = (sensitive value) 2025-07-27 00:01:34.415303 | orchestrator | 00:01:34.415 STDOUT terraform:  + public_key = (known after apply) 2025-07-27 00:01:34.415332 | orchestrator | 00:01:34.415 STDOUT terraform:  + region = (known after apply) 2025-07-27 00:01:34.415366 | orchestrator | 00:01:34.415 STDOUT terraform:  + user_id = (known after apply) 2025-07-27 00:01:34.415373 | orchestrator | 00:01:34.415 STDOUT terraform:  } 2025-07-27 00:01:34.415422 | orchestrator | 00:01:34.415 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-07-27 00:01:34.415472 | orchestrator | 00:01:34.415 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-27 00:01:34.415502 | orchestrator | 00:01:34.415 STDOUT terraform:  + device = (known after apply) 2025-07-27 00:01:34.415531 | orchestrator | 
00:01:34.415 STDOUT terraform:  + id = (known after apply) 2025-07-27 00:01:34.415560 | orchestrator | 00:01:34.415 STDOUT terraform:  + instance_id = (known after apply) 2025-07-27 00:01:34.415609 | orchestrator | 00:01:34.415 STDOUT terraform:  + region = (known after apply) 2025-07-27 00:01:34.415638 | orchestrator | 00:01:34.415 STDOUT terraform:  + volume_id = (known after apply) 2025-07-27 00:01:34.415653 | orchestrator | 00:01:34.415 STDOUT terraform:  } 2025-07-27 00:01:34.415706 | orchestrator | 00:01:34.415 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-07-27 00:01:34.415756 | orchestrator | 00:01:34.415 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-27 00:01:34.415787 | orchestrator | 00:01:34.415 STDOUT terraform:  + device = (known after apply) 2025-07-27 00:01:34.415817 | orchestrator | 00:01:34.415 STDOUT terraform:  + id = (known after apply) 2025-07-27 00:01:34.415845 | orchestrator | 00:01:34.415 STDOUT terraform:  + instance_id = (known after apply) 2025-07-27 00:01:34.415875 | orchestrator | 00:01:34.415 STDOUT terraform:  + region = (known after apply) 2025-07-27 00:01:34.415903 | orchestrator | 00:01:34.415 STDOUT terraform:  + volume_id = (known after apply) 2025-07-27 00:01:34.415918 | orchestrator | 00:01:34.415 STDOUT terraform:  } 2025-07-27 00:01:34.415970 | orchestrator | 00:01:34.415 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-07-27 00:01:34.416020 | orchestrator | 00:01:34.415 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-27 00:01:34.416049 | orchestrator | 00:01:34.416 STDOUT terraform:  + device = (known after apply) 2025-07-27 00:01:34.416079 | orchestrator | 00:01:34.416 STDOUT terraform:  + id = (known after apply) 2025-07-27 00:01:34.416108 | orchestrator | 00:01:34.416 STDOUT terraform:  + instance_id = 
(known after apply) 2025-07-27 00:01:34.416137 | orchestrator | 00:01:34.416 STDOUT terraform:  + region = (known after apply) 2025-07-27 00:01:34.416167 | orchestrator | 00:01:34.416 STDOUT terraform:  + volume_id = (known after apply) 2025-07-27 00:01:34.416180 | orchestrator | 00:01:34.416 STDOUT terraform:  } 2025-07-27 00:01:34.416268 | orchestrator | 00:01:34.416 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-07-27 00:01:34.416319 | orchestrator | 00:01:34.416 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-27 00:01:34.416352 | orchestrator | 00:01:34.416 STDOUT terraform:  + device = (known after apply) 2025-07-27 00:01:34.416380 | orchestrator | 00:01:34.416 STDOUT terraform:  + id = (known after apply) 2025-07-27 00:01:34.416409 | orchestrator | 00:01:34.416 STDOUT terraform:  + instance_id = (known after apply) 2025-07-27 00:01:34.416438 | orchestrator | 00:01:34.416 STDOUT terraform:  + region = (known after apply) 2025-07-27 00:01:34.416468 | orchestrator | 00:01:34.416 STDOUT terraform:  + volume_id = (known after apply) 2025-07-27 00:01:34.416474 | orchestrator | 00:01:34.416 STDOUT terraform:  } 2025-07-27 00:01:34.416537 | orchestrator | 00:01:34.416 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-07-27 00:01:34.416589 | orchestrator | 00:01:34.416 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-27 00:01:34.416618 | orchestrator | 00:01:34.416 STDOUT terraform:  + device = (known after apply) 2025-07-27 00:01:34.416653 | orchestrator | 00:01:34.416 STDOUT terraform:  + id = (known after apply) 2025-07-27 00:01:34.416678 | orchestrator | 00:01:34.416 STDOUT terraform:  + instance_id = (known after apply) 2025-07-27 00:01:34.416708 | orchestrator | 00:01:34.416 STDOUT terraform:  + region = (known after apply) 2025-07-27 00:01:34.416737 
| orchestrator | 00:01:34.416 STDOUT terraform:  + volume_id = (known after apply) 2025-07-27 00:01:34.416743 | orchestrator | 00:01:34.416 STDOUT terraform:  } 2025-07-27 00:01:34.416797 | orchestrator | 00:01:34.416 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-07-27 00:01:34.416846 | orchestrator | 00:01:34.416 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-27 00:01:34.416875 | orchestrator | 00:01:34.416 STDOUT terraform:  + device = (known after apply) 2025-07-27 00:01:34.416905 | orchestrator | 00:01:34.416 STDOUT terraform:  + id = (known after apply) 2025-07-27 00:01:34.416934 | orchestrator | 00:01:34.416 STDOUT terraform:  + instance_id = (known after apply) 2025-07-27 00:01:34.416964 | orchestrator | 00:01:34.416 STDOUT terraform:  + region = (known after apply) 2025-07-27 00:01:34.416992 | orchestrator | 00:01:34.416 STDOUT terraform:  + volume_id = (known after apply) 2025-07-27 00:01:34.417007 | orchestrator | 00:01:34.416 STDOUT terraform:  } 2025-07-27 00:01:34.417060 | orchestrator | 00:01:34.417 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-07-27 00:01:34.417110 | orchestrator | 00:01:34.417 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-27 00:01:34.417139 | orchestrator | 00:01:34.417 STDOUT terraform:  + device = (known after apply) 2025-07-27 00:01:34.417169 | orchestrator | 00:01:34.417 STDOUT terraform:  + id = (known after apply) 2025-07-27 00:01:34.417197 | orchestrator | 00:01:34.417 STDOUT terraform:  + instance_id = (known after apply) 2025-07-27 00:01:34.417245 | orchestrator | 00:01:34.417 STDOUT terraform:  + region = (known after apply) 2025-07-27 00:01:34.417983 | orchestrator | 00:01:34.417 STDOUT terraform:  + volume_id = (known after apply) 2025-07-27 00:01:34.418005 | orchestrator | 00:01:34.417 STDOUT 
terraform:  } 2025-07-27 00:01:34.418010 | orchestrator | 00:01:34.417 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-07-27 00:01:34.418036 | orchestrator | 00:01:34.417 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-27 00:01:34.418040 | orchestrator | 00:01:34.417 STDOUT terraform:  + device = (known after apply) 2025-07-27 00:01:34.418044 | orchestrator | 00:01:34.417 STDOUT terraform:  + id = (known after apply) 2025-07-27 00:01:34.418048 | orchestrator | 00:01:34.417 STDOUT terraform:  + instance_id = (known after apply) 2025-07-27 00:01:34.418052 | orchestrator | 00:01:34.417 STDOUT terraform:  + region = (known after apply) 2025-07-27 00:01:34.418064 | orchestrator | 00:01:34.417 STDOUT terraform:  + volume_id = (known after apply) 2025-07-27 00:01:34.418068 | orchestrator | 00:01:34.417 STDOUT terraform:  } 2025-07-27 00:01:34.418072 | orchestrator | 00:01:34.417 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-07-27 00:01:34.418075 | orchestrator | 00:01:34.417 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-27 00:01:34.418079 | orchestrator | 00:01:34.417 STDOUT terraform:  + device = (known after apply) 2025-07-27 00:01:34.418083 | orchestrator | 00:01:34.417 STDOUT terraform:  + id = (known after apply) 2025-07-27 00:01:34.418087 | orchestrator | 00:01:34.417 STDOUT terraform:  + instance_id = (known after apply) 2025-07-27 00:01:34.418090 | orchestrator | 00:01:34.417 STDOUT terraform:  + region = (known after apply) 2025-07-27 00:01:34.418094 | orchestrator | 00:01:34.417 STDOUT terraform:  + volume_id = (known after apply) 2025-07-27 00:01:34.418098 | orchestrator | 00:01:34.417 STDOUT terraform:  } 2025-07-27 00:01:34.418105 | orchestrator | 00:01:34.417 STDOUT terraform:  # 
openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-07-27 00:01:34.418110 | orchestrator | 00:01:34.417 STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-07-27 00:01:34.418114 | orchestrator | 00:01:34.417 STDOUT terraform:  + fixed_ip = (known after apply) 2025-07-27 00:01:34.418118 | orchestrator | 00:01:34.417 STDOUT terraform:  + floating_ip = (known after apply) 2025-07-27 00:01:34.418122 | orchestrator | 00:01:34.417 STDOUT terraform:  + id = (known after apply) 2025-07-27 00:01:34.418129 | orchestrator | 00:01:34.417 STDOUT terraform:  + port_id = (known after apply) 2025-07-27 00:01:34.418133 | orchestrator | 00:01:34.417 STDOUT terraform:  + region = (known after apply) 2025-07-27 00:01:34.418199 | orchestrator | 00:01:34.418 STDOUT terraform:  } 2025-07-27 00:01:34.418277 | orchestrator | 00:01:34.418 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-07-27 00:01:34.418334 | orchestrator | 00:01:34.418 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-07-27 00:01:34.418364 | orchestrator | 00:01:34.418 STDOUT terraform:  + address = (known after apply) 2025-07-27 00:01:34.418394 | orchestrator | 00:01:34.418 STDOUT terraform:  + all_tags = (known after apply) 2025-07-27 00:01:34.418424 | orchestrator | 00:01:34.418 STDOUT terraform:  + dns_domain = (known after apply) 2025-07-27 00:01:34.418454 | orchestrator | 00:01:34.418 STDOUT terraform:  + dns_name = (known after apply) 2025-07-27 00:01:34.418483 | orchestrator | 00:01:34.418 STDOUT terraform:  + fixed_ip = (known after apply) 2025-07-27 00:01:34.418513 | orchestrator | 00:01:34.418 STDOUT terraform:  + id = (known after apply) 2025-07-27 00:01:34.418540 | orchestrator | 00:01:34.418 STDOUT terraform:  + pool = "public" 2025-07-27 00:01:34.418571 | orchestrator | 00:01:34.418 STDOUT terraform:  + 
port_id = (known after apply) 2025-07-27 00:01:34.418602 | orchestrator | 00:01:34.418 STDOUT terraform:  + region = (known after apply) 2025-07-27 00:01:34.418632 | orchestrator | 00:01:34.418 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-27 00:01:34.418662 | orchestrator | 00:01:34.418 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-27 00:01:34.418677 | orchestrator | 00:01:34.418 STDOUT terraform:  } 2025-07-27 00:01:34.418730 | orchestrator | 00:01:34.418 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-07-27 00:01:34.418782 | orchestrator | 00:01:34.418 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-07-27 00:01:34.418830 | orchestrator | 00:01:34.418 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-27 00:01:34.418872 | orchestrator | 00:01:34.418 STDOUT terraform:  + all_tags = (known after apply) 2025-07-27 00:01:34.418899 | orchestrator | 00:01:34.418 STDOUT terraform:  + availability_zone_hints = [ 2025-07-27 00:01:34.418917 | orchestrator | 00:01:34.418 STDOUT terraform:  + "nova", 2025-07-27 00:01:34.418934 | orchestrator | 00:01:34.418 STDOUT terraform:  ] 2025-07-27 00:01:34.418978 | orchestrator | 00:01:34.418 STDOUT terraform:  + dns_domain = (known after apply) 2025-07-27 00:01:34.419022 | orchestrator | 00:01:34.418 STDOUT terraform:  + external = (known after apply) 2025-07-27 00:01:34.419067 | orchestrator | 00:01:34.419 STDOUT terraform:  + id = (known after apply) 2025-07-27 00:01:34.419112 | orchestrator | 00:01:34.419 STDOUT terraform:  + mtu = (known after apply) 2025-07-27 00:01:34.419159 | orchestrator | 00:01:34.419 STDOUT terraform:  + name = "net-testbed-management" 2025-07-27 00:01:34.419223 | orchestrator | 00:01:34.419 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-27 00:01:34.419285 | orchestrator | 00:01:34.419 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-27 
00:01:34.419339 | orchestrator | 00:01:34.419 STDOUT terraform:  + region = (known after apply) 2025-07-27 00:01:34.419383 | orchestrator | 00:01:34.419 STDOUT terraform:  + shared = (known after apply) 2025-07-27 00:01:34.419428 | orchestrator | 00:01:34.419 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-27 00:01:34.419469 | orchestrator | 00:01:34.419 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-07-27 00:01:34.419495 | orchestrator | 00:01:34.419 STDOUT terraform:  + segments (known after apply) 2025-07-27 00:01:34.419513 | orchestrator | 00:01:34.419 STDOUT terraform:  } 2025-07-27 00:01:34.419565 | orchestrator | 00:01:34.419 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-07-27 00:01:34.419618 | orchestrator | 00:01:34.419 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-07-27 00:01:34.419661 | orchestrator | 00:01:34.419 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-27 00:01:34.419703 | orchestrator | 00:01:34.419 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-27 00:01:34.419742 | orchestrator | 00:01:34.419 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-27 00:01:34.419785 | orchestrator | 00:01:34.419 STDOUT terraform:  + all_tags = (known after apply) 2025-07-27 00:01:34.419840 | orchestrator | 00:01:34.419 STDOUT terraform:  + device_id = (known after apply) 2025-07-27 00:01:34.419875 | orchestrator | 00:01:34.419 STDOUT terraform:  + device_owner = (known after apply) 2025-07-27 00:01:34.419916 | orchestrator | 00:01:34.419 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-27 00:01:34.419958 | orchestrator | 00:01:34.419 STDOUT terraform:  + dns_name = (known after apply) 2025-07-27 00:01:34.426084 | orchestrator | 00:01:34.419 STDOUT terraform:  + id = (known after apply) 2025-07-27 00:01:34.426121 | orchestrator | 00:01:34.419 STDOUT terraform:  + 
mac_address = (known after apply) 2025-07-27 00:01:34.426126 | orchestrator | 00:01:34.420 STDOUT terraform:  + network_id = (known after apply) 2025-07-27 00:01:34.426130 | orchestrator | 00:01:34.420 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-27 00:01:34.426134 | orchestrator | 00:01:34.420 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-27 00:01:34.426138 | orchestrator | 00:01:34.420 STDOUT terraform:  + region = (known after apply) 2025-07-27 00:01:34.426142 | orchestrator | 00:01:34.420 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-27 00:01:34.426146 | orchestrator | 00:01:34.420 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-27 00:01:34.426150 | orchestrator | 00:01:34.420 STDOUT terraform:  + allowed_address_pairs { 2025-07-27 00:01:34.426154 | orchestrator | 00:01:34.420 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-27 00:01:34.426159 | orchestrator | 00:01:34.420 STDOUT terraform:  } 2025-07-27 00:01:34.426162 | orchestrator | 00:01:34.420 STDOUT terraform:  + allowed_address_pairs { 2025-07-27 00:01:34.426166 | orchestrator | 00:01:34.420 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-27 00:01:34.426170 | orchestrator | 00:01:34.420 STDOUT terraform:  } 2025-07-27 00:01:34.426174 | orchestrator | 00:01:34.420 STDOUT terraform:  + binding (known after apply) 2025-07-27 00:01:34.426177 | orchestrator | 00:01:34.420 STDOUT terraform:  + fixed_ip { 2025-07-27 00:01:34.426181 | orchestrator | 00:01:34.420 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-07-27 00:01:34.426185 | orchestrator | 00:01:34.420 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-27 00:01:34.426188 | orchestrator | 00:01:34.420 STDOUT terraform:  } 2025-07-27 00:01:34.426226 | orchestrator | 00:01:34.420 STDOUT terraform:  } 2025-07-27 00:01:34.426231 | orchestrator | 00:01:34.420 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will 
be created 2025-07-27 00:01:34.426236 | orchestrator | 00:01:34.420 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-27 00:01:34.426245 | orchestrator | 00:01:34.420 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-27 00:01:34.426249 | orchestrator | 00:01:34.420 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-27 00:01:34.426253 | orchestrator | 00:01:34.420 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-27 00:01:34.426256 | orchestrator | 00:01:34.420 STDOUT terraform:  + all_tags = (known after apply) 2025-07-27 00:01:34.426260 | orchestrator | 00:01:34.420 STDOUT terraform:  + device_id = (known after apply) 2025-07-27 00:01:34.426273 | orchestrator | 00:01:34.420 STDOUT terraform:  + device_owner = (known after apply) 2025-07-27 00:01:34.426277 | orchestrator | 00:01:34.420 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-27 00:01:34.426281 | orchestrator | 00:01:34.420 STDOUT terraform:  + dns_name = (known after apply) 2025-07-27 00:01:34.426284 | orchestrator | 00:01:34.420 STDOUT terraform:  + id = (known after apply) 2025-07-27 00:01:34.426288 | orchestrator | 00:01:34.420 STDOUT terraform:  + mac_address = (known after apply) 2025-07-27 00:01:34.426292 | orchestrator | 00:01:34.420 STDOUT terraform:  + network_id = (known after apply) 2025-07-27 00:01:34.426296 | orchestrator | 00:01:34.420 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-27 00:01:34.426299 | orchestrator | 00:01:34.421 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-27 00:01:34.426303 | orchestrator | 00:01:34.421 STDOUT terraform:  + region = (known after apply) 2025-07-27 00:01:34.426307 | orchestrator | 00:01:34.421 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-27 00:01:34.426317 | orchestrator | 00:01:34.421 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-27 00:01:34.426321 | 
2025-07-27 00:01:34 | orchestrator | 00:01:34.421 STDOUT terraform:
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)
      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known
after apply) 2025-07-27 00:01:34.431835 | orchestrator | 00:01:34.431 STDOUT terraform:  } 2025-07-27 00:01:34.431871 | orchestrator | 00:01:34.431 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-07-27 00:01:34.431915 | orchestrator | 00:01:34.431 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-07-27 00:01:34.431944 | orchestrator | 00:01:34.431 STDOUT terraform:  + all_tags = (known after apply) 2025-07-27 00:01:34.431981 | orchestrator | 00:01:34.431 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-07-27 00:01:34.431988 | orchestrator | 00:01:34.431 STDOUT terraform:  + dns_nameservers = [ 2025-07-27 00:01:34.432005 | orchestrator | 00:01:34.431 STDOUT terraform:  + "8.8.8.8", 2025-07-27 00:01:34.432019 | orchestrator | 00:01:34.432 STDOUT terraform:  + "9.9.9.9", 2025-07-27 00:01:34.432034 | orchestrator | 00:01:34.432 STDOUT terraform:  ] 2025-07-27 00:01:34.432047 | orchestrator | 00:01:34.432 STDOUT terraform:  + enable_dhcp = true 2025-07-27 00:01:34.432077 | orchestrator | 00:01:34.432 STDOUT terraform:  + gateway_ip = (known after apply) 2025-07-27 00:01:34.432106 | orchestrator | 00:01:34.432 STDOUT terraform:  + id = (known after apply) 2025-07-27 00:01:34.432125 | orchestrator | 00:01:34.432 STDOUT terraform:  + ip_version = 4 2025-07-27 00:01:34.432154 | orchestrator | 00:01:34.432 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-07-27 00:01:34.432183 | orchestrator | 00:01:34.432 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-07-27 00:01:34.432236 | orchestrator | 00:01:34.432 STDOUT terraform:  + name = "subnet-testbed-management" 2025-07-27 00:01:34.432260 | orchestrator | 00:01:34.432 STDOUT terraform:  + network_id = (known after apply) 2025-07-27 00:01:34.432279 | orchestrator | 00:01:34.432 STDOUT terraform:  + no_gateway = false 2025-07-27 00:01:34.432308 | orchestrator | 00:01:34.432 STDOUT terraform:  + region = (known after 
apply) 2025-07-27 00:01:34.432337 | orchestrator | 00:01:34.432 STDOUT terraform:  + service_types = (known after apply) 2025-07-27 00:01:34.432366 | orchestrator | 00:01:34.432 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-27 00:01:34.432382 | orchestrator | 00:01:34.432 STDOUT terraform:  + allocation_pool { 2025-07-27 00:01:34.432406 | orchestrator | 00:01:34.432 STDOUT terraform:  + end = "192.168.31.250" 2025-07-27 00:01:34.432429 | orchestrator | 00:01:34.432 STDOUT terraform:  + start = "192.168.31.200" 2025-07-27 00:01:34.432446 | orchestrator | 00:01:34.432 STDOUT terraform:  } 2025-07-27 00:01:34.432452 | orchestrator | 00:01:34.432 STDOUT terraform:  } 2025-07-27 00:01:34.432478 | orchestrator | 00:01:34.432 STDOUT terraform:  # terraform_data.image will be created 2025-07-27 00:01:34.432501 | orchestrator | 00:01:34.432 STDOUT terraform:  + resource "terraform_data" "image" { 2025-07-27 00:01:34.432525 | orchestrator | 00:01:34.432 STDOUT terraform:  + id = (known after apply) 2025-07-27 00:01:34.432544 | orchestrator | 00:01:34.432 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-07-27 00:01:34.432566 | orchestrator | 00:01:34.432 STDOUT terraform:  + output = (known after apply) 2025-07-27 00:01:34.432572 | orchestrator | 00:01:34.432 STDOUT terraform:  } 2025-07-27 00:01:34.432609 | orchestrator | 00:01:34.432 STDOUT terraform:  # terraform_data.image_node will be created 2025-07-27 00:01:34.432628 | orchestrator | 00:01:34.432 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-07-27 00:01:34.432650 | orchestrator | 00:01:34.432 STDOUT terraform:  + id = (known after apply) 2025-07-27 00:01:34.432669 | orchestrator | 00:01:34.432 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-07-27 00:01:34.432692 | orchestrator | 00:01:34.432 STDOUT terraform:  + output = (known after apply) 2025-07-27 00:01:34.432706 | orchestrator | 00:01:34.432 STDOUT terraform:  } 2025-07-27 00:01:34.432733 | orchestrator | 00:01:34.432 STDOUT 
terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-07-27 00:01:34.432746 | orchestrator | 00:01:34.432 STDOUT terraform: Changes to Outputs: 2025-07-27 00:01:34.432770 | orchestrator | 00:01:34.432 STDOUT terraform:  + manager_address = (sensitive value) 2025-07-27 00:01:34.432793 | orchestrator | 00:01:34.432 STDOUT terraform:  + private_key = (sensitive value) 2025-07-27 00:01:34.649555 | orchestrator | 00:01:34.649 STDOUT terraform: terraform_data.image_node: Creating... 2025-07-27 00:01:34.649674 | orchestrator | 00:01:34.649 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=b6fde427-decf-bf17-0b76-bdcc5633fda1] 2025-07-27 00:01:34.649960 | orchestrator | 00:01:34.649 STDOUT terraform: terraform_data.image: Creating... 2025-07-27 00:01:34.658162 | orchestrator | 00:01:34.657 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=a957b3b8-7d0d-6d6e-2b38-957b47fa6501] 2025-07-27 00:01:34.675371 | orchestrator | 00:01:34.674 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-07-27 00:01:34.676789 | orchestrator | 00:01:34.676 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-07-27 00:01:34.695167 | orchestrator | 00:01:34.694 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-07-27 00:01:34.697666 | orchestrator | 00:01:34.697 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-07-27 00:01:34.698478 | orchestrator | 00:01:34.698 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-07-27 00:01:34.698768 | orchestrator | 00:01:34.698 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-07-27 00:01:34.699358 | orchestrator | 00:01:34.699 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-07-27 00:01:34.699871 | orchestrator | 00:01:34.699 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 
2025-07-27 00:01:34.700169 | orchestrator | 00:01:34.700 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-07-27 00:01:34.700869 | orchestrator | 00:01:34.700 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-07-27 00:01:35.183199 | orchestrator | 00:01:35.182 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-07-27 00:01:35.183332 | orchestrator | 00:01:35.183 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-07-27 00:01:35.186596 | orchestrator | 00:01:35.186 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-07-27 00:01:35.187010 | orchestrator | 00:01:35.186 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-07-27 00:01:35.257055 | orchestrator | 00:01:35.256 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2025-07-27 00:01:35.262325 | orchestrator | 00:01:35.262 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-07-27 00:01:35.730186 | orchestrator | 00:01:35.729 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=a656625a-d967-480f-8810-b7abc26e42b9]
2025-07-27 00:01:35.736439 | orchestrator | 00:01:35.736 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-07-27 00:01:38.287930 | orchestrator | 00:01:38.287 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=3f16037d-7e7a-4d1c-8a08-a73ad765cc6a]
2025-07-27 00:01:38.293449 | orchestrator | 00:01:38.293 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-07-27 00:01:38.321261 | orchestrator | 00:01:38.320 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=c98e51bc-91d6-4ddc-9065-85c0c9862500]
2025-07-27 00:01:38.343786 | orchestrator | 00:01:38.341 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-07-27 00:01:38.344619 | orchestrator | 00:01:38.344 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=9dfacb5e-e180-4af8-a7c9-16d6f43faaeb]
2025-07-27 00:01:38.348392 | orchestrator | 00:01:38.346 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=be80a4a306c0b8e82438177c754bf06044a77b9a]
2025-07-27 00:01:38.351725 | orchestrator | 00:01:38.351 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-07-27 00:01:38.354068 | orchestrator | 00:01:38.353 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-07-27 00:01:38.361089 | orchestrator | 00:01:38.360 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=2bb1f534b1f1e66715039c59e7fe3d84cb9859ae]
2025-07-27 00:01:38.365882 | orchestrator | 00:01:38.365 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-07-27 00:01:38.369806 | orchestrator | 00:01:38.369 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=2e146b97-b92c-4d75-a47a-cbb87d521c9d]
2025-07-27 00:01:38.373835 | orchestrator | 00:01:38.373 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=5e7493cc-7dc8-467a-98f7-475ea6c16568]
2025-07-27 00:01:38.378247 | orchestrator | 00:01:38.378 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=dc9eac8a-9f48-43ff-b485-8b8953fa59b9]
2025-07-27 00:01:38.383870 | orchestrator | 00:01:38.383 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-07-27 00:01:38.386043 | orchestrator | 00:01:38.385 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-07-27 00:01:38.387204 | orchestrator | 00:01:38.387 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-07-27 00:01:38.441572 | orchestrator | 00:01:38.438 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=548e88f5-d1e4-47ee-bc67-21c274497cc2]
2025-07-27 00:01:38.445124 | orchestrator | 00:01:38.444 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-07-27 00:01:38.462771 | orchestrator | 00:01:38.462 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=38849156-f389-4fad-a260-ba1f11a4ab6e]
2025-07-27 00:01:38.475102 | orchestrator | 00:01:38.474 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=2c9a9e55-6fa2-466c-9c9b-4054ef935a26]
2025-07-27 00:01:39.078504 | orchestrator | 00:01:39.078 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=60696cc9-3f39-4c12-b330-bab58a7b885e]
2025-07-27 00:01:39.393885 | orchestrator | 00:01:39.393 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=be867b12-9058-43c6-ad48-03b74ce5161f]
2025-07-27 00:01:39.401026 | orchestrator | 00:01:39.400 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-07-27 00:01:41.669601 | orchestrator | 00:01:41.669 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=d989b6d8-137e-4338-b37b-607aeece8d9f]
2025-07-27 00:01:41.730901 | orchestrator | 00:01:41.730 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=90347f57-4e1b-4c22-aefb-d25b8ae3ed98]
2025-07-27 00:01:41.755665 | orchestrator | 00:01:41.755 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=2f26573d-c6f5-4551-a85e-c40c356a8298]
2025-07-27 00:01:41.782565 | orchestrator | 00:01:41.782 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=770cb937-1532-47c7-aee4-e0102b08a0e1]
2025-07-27 00:01:41.841576 | orchestrator | 00:01:41.841 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=95ed877f-8d4e-4e70-8b86-df064da55312]
2025-07-27 00:01:41.855616 | orchestrator | 00:01:41.855 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=4680cdc6-b067-43a9-b83f-19d30c1bed2c]
2025-07-27 00:01:42.633597 | orchestrator | 00:01:42.633 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 4s [id=64cce7ba-8be1-4d31-9f2e-76a72c1dda1a]
2025-07-27 00:01:42.641108 | orchestrator | 00:01:42.640 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-07-27 00:01:42.642663 | orchestrator | 00:01:42.642 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-07-27 00:01:42.647641 | orchestrator | 00:01:42.647 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-07-27 00:01:42.844594 | orchestrator | 00:01:42.844 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=f80d18fa-3d1b-4d65-91ee-8c919370feb2]
2025-07-27 00:01:42.855073 | orchestrator | 00:01:42.854 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-07-27 00:01:42.857645 | orchestrator | 00:01:42.857 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-07-27 00:01:42.866095 | orchestrator | 00:01:42.865 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-07-27 00:01:42.872798 | orchestrator | 00:01:42.872 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-07-27 00:01:42.872831 | orchestrator | 00:01:42.872 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-07-27 00:01:42.872836 | orchestrator | 00:01:42.872 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-07-27 00:01:42.872841 | orchestrator | 00:01:42.872 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-07-27 00:01:42.877923 | orchestrator | 00:01:42.877 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-07-27 00:01:42.912849 | orchestrator | 00:01:42.912 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=540c3d22-3eb7-4a9f-afc3-fe197389b671]
2025-07-27 00:01:42.923057 | orchestrator | 00:01:42.922 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-07-27 00:01:43.021773 | orchestrator | 00:01:43.021 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=12320aaf-6147-4d33-9a66-b7be95e8c3ae]
2025-07-27 00:01:43.028500 | orchestrator | 00:01:43.028 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-07-27 00:01:43.212309 | orchestrator | 00:01:43.211 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=3d4d85aa-70e4-4680-8e9d-95d8dca27daa]
2025-07-27 00:01:43.220846 | orchestrator | 00:01:43.220 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-07-27 00:01:43.364991 | orchestrator | 00:01:43.364 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=1cfbedcd-5812-49d3-b1b1-304a1d3e0091]
2025-07-27 00:01:43.371889 | orchestrator | 00:01:43.371 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-07-27 00:01:43.459697 | orchestrator | 00:01:43.459 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=0a7410dd-8d5d-4c9e-bd55-f647332ccdb6]
2025-07-27 00:01:43.472915 | orchestrator | 00:01:43.472 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-07-27 00:01:43.541996 | orchestrator | 00:01:43.541 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=919dfe01-42d2-4bb5-855c-9433869281f3]
2025-07-27 00:01:43.547532 | orchestrator | 00:01:43.547 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-07-27 00:01:43.582398 | orchestrator | 00:01:43.582 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=04e3087e-1ad2-4e03-8abf-4f8318afa926]
2025-07-27 00:01:43.589011 | orchestrator | 00:01:43.588 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-07-27 00:01:43.644599 | orchestrator | 00:01:43.644 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=218de33b-048d-412f-a98a-8881d2e6631e]
2025-07-27 00:01:43.651079 | orchestrator | 00:01:43.650 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-07-27 00:01:43.703399 | orchestrator | 00:01:43.703 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=6dc673d5-aa14-4dd6-9879-f5a861bfd7c7]
2025-07-27 00:01:43.874198 | orchestrator | 00:01:43.873 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=6dccca4b-5a56-4b57-a2d2-c71b1b98d10b]
2025-07-27 00:01:43.937650 | orchestrator | 00:01:43.937 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=4bddd341-3907-4051-a9c1-91d8305185b2]
2025-07-27 00:01:44.044162 | orchestrator | 00:01:44.043 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=1bca0290-746a-49bd-814f-9415c5f2297c]
2025-07-27 00:01:44.064315 | orchestrator | 00:01:44.064 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=b6c8acec-e692-4b43-b3ff-fe2460ed92f7]
2025-07-27 00:01:44.119750 | orchestrator | 00:01:44.119 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=2c4d596d-ca33-4807-8d7e-3cd1168ed877]
2025-07-27 00:01:44.376026 | orchestrator | 00:01:44.375 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=8b2c3cf5-db15-481e-ba5f-69af5efec822]
2025-07-27 00:01:44.408650 | orchestrator | 00:01:44.408 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=0ae57d18-17c3-4c5b-8236-a51edf7e745c]
2025-07-27 00:01:44.569774 | orchestrator | 00:01:44.569 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=dcc6c575-0de8-4f88-94aa-00c513574093]
2025-07-27 00:01:46.073994 | orchestrator | 00:01:46.073 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=971ad6c0-baee-4ac9-b542-4593c923730a]
2025-07-27 00:01:46.727811 | orchestrator | 00:01:46.095 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-07-27 00:01:46.727906 | orchestrator | 00:01:46.107 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-07-27 00:01:46.727924 | orchestrator | 00:01:46.121 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-07-27 00:01:46.727936 | orchestrator | 00:01:46.128 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-07-27 00:01:46.727946 | orchestrator | 00:01:46.129 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-07-27 00:01:46.727957 | orchestrator | 00:01:46.138 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-07-27 00:01:46.727968 | orchestrator | 00:01:46.138 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-07-27 00:01:47.809937 | orchestrator | 00:01:47.809 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=848f0403-9778-4678-90b2-201ac5e462dd]
2025-07-27 00:01:47.823333 | orchestrator | 00:01:47.823 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-07-27 00:01:47.823916 | orchestrator | 00:01:47.823 STDOUT terraform: local_file.inventory: Creating...
2025-07-27 00:01:47.827739 | orchestrator | 00:01:47.827 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-07-27 00:01:47.830513 | orchestrator | 00:01:47.830 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=41ee9071bae7b16934780421a95d927771d0f052]
2025-07-27 00:01:47.834978 | orchestrator | 00:01:47.834 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=b4ce328fc7308219dd41f49f69c9363473681769]
2025-07-27 00:01:49.179479 | orchestrator | 00:01:49.179 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=848f0403-9778-4678-90b2-201ac5e462dd]
2025-07-27 00:01:56.112164 | orchestrator | 00:01:56.111 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-07-27 00:01:56.124352 | orchestrator | 00:01:56.124 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-07-27 00:01:56.129664 | orchestrator | 00:01:56.129 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-07-27 00:01:56.130936 | orchestrator | 00:01:56.130 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-07-27 00:01:56.143275 | orchestrator | 00:01:56.142 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-07-27 00:01:56.143430 | orchestrator | 00:01:56.143 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-07-27 00:02:06.115814 | orchestrator | 00:02:06.115 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-07-27 00:02:06.125089 | orchestrator | 00:02:06.124 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-07-27 00:02:06.130454 | orchestrator | 00:02:06.130 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-07-27 00:02:06.131602 | orchestrator | 00:02:06.131 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-07-27 00:02:06.144040 | orchestrator | 00:02:06.143 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-07-27 00:02:06.144130 | orchestrator | 00:02:06.143 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-07-27 00:02:06.698938 | orchestrator | 00:02:06.698 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=2a71942e-db9f-4067-b862-ea0207494abd]
2025-07-27 00:02:16.116068 | orchestrator | 00:02:16.115 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-07-27 00:02:16.125172 | orchestrator | 00:02:16.125 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-07-27 00:02:16.131565 | orchestrator | 00:02:16.131 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2025-07-27 00:02:16.132674 | orchestrator | 00:02:16.132 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-07-27 00:02:16.144148 | orchestrator | 00:02:16.143 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-07-27 00:02:16.803462 | orchestrator | 00:02:16.803 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=8440dbcd-0df8-4c21-b601-428a796a9fd8]
2025-07-27 00:02:16.835633 | orchestrator | 00:02:16.835 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=ebf8a9cf-ae20-4386-8140-0578ead1265c]
2025-07-27 00:02:16.936530 | orchestrator | 00:02:16.936 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=c4b8932b-7971-47f7-b500-76c028ef4028]
2025-07-27 00:02:17.143371 | orchestrator | 00:02:17.142 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=bea3289c-0569-4afa-8e88-20e7fe4474b4]
2025-07-27 00:02:17.362673 | orchestrator | 00:02:17.362 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=d1e24fc5-ae25-4f41-9d4a-2e1e10d8e91a]
2025-07-27 00:02:17.388041 | orchestrator | 00:02:17.387 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-07-27 00:02:17.390929 | orchestrator | 00:02:17.390 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-07-27 00:02:17.397603 | orchestrator | 00:02:17.397 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-07-27 00:02:17.399226 | orchestrator | 00:02:17.399 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-07-27 00:02:17.402659 | orchestrator | 00:02:17.402 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-07-27 00:02:17.404820 | orchestrator | 00:02:17.404 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-07-27 00:02:17.411217 | orchestrator | 00:02:17.411 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-07-27 00:02:17.411616 | orchestrator | 00:02:17.411 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=1606231213906398365]
2025-07-27 00:02:17.419016 | orchestrator | 00:02:17.418 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-07-27 00:02:17.419277 | orchestrator | 00:02:17.419 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-07-27 00:02:17.430167 | orchestrator | 00:02:17.429 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-07-27 00:02:17.451343 | orchestrator | 00:02:17.451 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-07-27 00:02:20.776027 | orchestrator | 00:02:20.775 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=bea3289c-0569-4afa-8e88-20e7fe4474b4/3f16037d-7e7a-4d1c-8a08-a73ad765cc6a]
2025-07-27 00:02:20.785083 | orchestrator | 00:02:20.784 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=d1e24fc5-ae25-4f41-9d4a-2e1e10d8e91a/dc9eac8a-9f48-43ff-b485-8b8953fa59b9]
2025-07-27 00:02:20.807959 | orchestrator | 00:02:20.807 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=c4b8932b-7971-47f7-b500-76c028ef4028/2e146b97-b92c-4d75-a47a-cbb87d521c9d]
2025-07-27 00:02:20.821757 | orchestrator | 00:02:20.821 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=bea3289c-0569-4afa-8e88-20e7fe4474b4/38849156-f389-4fad-a260-ba1f11a4ab6e]
2025-07-27 00:02:20.853106 | orchestrator | 00:02:20.852 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=c4b8932b-7971-47f7-b500-76c028ef4028/548e88f5-d1e4-47ee-bc67-21c274497cc2]
2025-07-27 00:02:26.905695 | orchestrator | 00:02:26.905 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=d1e24fc5-ae25-4f41-9d4a-2e1e10d8e91a/9dfacb5e-e180-4af8-a7c9-16d6f43faaeb]
2025-07-27 00:02:26.933540 | orchestrator | 00:02:26.933 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=bea3289c-0569-4afa-8e88-20e7fe4474b4/c98e51bc-91d6-4ddc-9065-85c0c9862500]
2025-07-27 00:02:26.935633 | orchestrator | 00:02:26.935 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=d1e24fc5-ae25-4f41-9d4a-2e1e10d8e91a/2c9a9e55-6fa2-466c-9c9b-4054ef935a26]
2025-07-27 00:02:26.963368 | orchestrator | 00:02:26.962 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=c4b8932b-7971-47f7-b500-76c028ef4028/5e7493cc-7dc8-467a-98f7-475ea6c16568]
2025-07-27 00:02:27.452747 | orchestrator | 00:02:27.452 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-07-27 00:02:37.457787 | orchestrator | 00:02:37.457 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-07-27 00:02:38.362877 | orchestrator | 00:02:38.362 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=fab1d95f-6847-4d5c-8a44-8bf4f1c0fef3]
2025-07-27 00:02:38.383704 | orchestrator | 00:02:38.383 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-07-27 00:02:38.383797 | orchestrator | 00:02:38.383 STDOUT terraform: Outputs:
2025-07-27 00:02:38.383815 | orchestrator | 00:02:38.383 STDOUT terraform: manager_address =
2025-07-27 00:02:38.383878 | orchestrator | 00:02:38.383 STDOUT terraform: private_key =
2025-07-27 00:02:38.774029 | orchestrator | ok: Runtime: 0:01:09.795220
2025-07-27 00:02:38.809897 |
2025-07-27 00:02:38.810033 | TASK [Create infrastructure (stable)]
2025-07-27 00:02:39.344502 | orchestrator | skipping: Conditional result was False
2025-07-27 00:02:39.362563 |
2025-07-27 00:02:39.362762 | TASK [Fetch manager address]
2025-07-27 00:02:39.805725 | orchestrator | ok
2025-07-27 00:02:39.815380 |
2025-07-27 00:02:39.815530 | TASK [Set manager_host address]
2025-07-27 00:02:39.898503 | orchestrator | ok
2025-07-27 00:02:39.909012 |
2025-07-27 00:02:39.909337 | LOOP [Update ansible collections]
2025-07-27 00:02:41.792350 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-07-27 00:02:41.792868 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-07-27 00:02:41.792940 | orchestrator | Starting galaxy collection install process
2025-07-27 00:02:41.792982 | orchestrator | Process install dependency map
2025-07-27 00:02:41.793018 | orchestrator | Starting collection install process
2025-07-27 00:02:41.793053 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons'
2025-07-27 00:02:41.793091 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons
2025-07-27 00:02:41.793131 | orchestrator | osism.commons:999.0.0 was installed successfully
2025-07-27 00:02:41.793217 | orchestrator | ok: Item: commons Runtime: 0:00:01.523661
2025-07-27 00:02:42.714272 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-07-27 00:02:42.714669 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-07-27 00:02:42.714779 | orchestrator | Starting galaxy collection install process
2025-07-27 00:02:42.714884 | orchestrator | Process install dependency map
2025-07-27 00:02:42.714985 | orchestrator | Starting collection install process
2025-07-27 00:02:42.715043 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services'
2025-07-27 00:02:42.715101 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services
2025-07-27 00:02:42.715155 | orchestrator | osism.services:999.0.0 was installed successfully
2025-07-27 00:02:42.715240 | orchestrator | ok: Item: services Runtime: 0:00:00.654856
2025-07-27 00:02:42.742985 |
2025-07-27 00:02:42.743152 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-07-27 00:02:53.313313 | orchestrator | ok
2025-07-27 00:02:53.324673 |
2025-07-27 00:02:53.324796 | TASK [Wait a little longer for the manager so that everything is ready]
2025-07-27 00:03:53.363307 | orchestrator | ok
2025-07-27 00:03:53.373595 |
2025-07-27 00:03:53.373731 | TASK [Fetch manager ssh hostkey]
2025-07-27 00:03:54.957293 | orchestrator | Output suppressed because no_log was given
2025-07-27 00:03:54.971861 |
2025-07-27 00:03:54.972011 | TASK [Get ssh keypair from terraform environment]
2025-07-27 00:03:55.508148 | orchestrator | ok: Runtime: 0:00:00.010724
2025-07-27 00:03:55.527566 |
2025-07-27 00:03:55.527798 | TASK [Point out that the following task takes some time and does not give any output]
2025-07-27 00:03:55.573057 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-07-27 00:03:55.582677 |
2025-07-27 00:03:55.582815 | TASK [Run manager part 0]
2025-07-27 00:03:56.928093 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-07-27 00:03:57.085428 | orchestrator |
2025-07-27 00:03:57.085483 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2025-07-27 00:03:57.085492 | orchestrator |
2025-07-27 00:03:57.085509 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2025-07-27 00:03:59.017902 | orchestrator | ok: [testbed-manager]
2025-07-27 00:03:59.017945 | orchestrator |
2025-07-27 00:03:59.017965 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-07-27 00:03:59.017975 | orchestrator |
2025-07-27 00:03:59.017983 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-27 00:04:00.919085 | orchestrator | ok: [testbed-manager]
2025-07-27 00:04:00.919181 | orchestrator |
2025-07-27 00:04:00.919200 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-07-27 00:04:01.619624 | orchestrator | ok: [testbed-manager]
2025-07-27 00:04:01.619680 | orchestrator |
2025-07-27 00:04:01.619689 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-07-27 00:04:01.672715 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:04:01.672769 | orchestrator |
2025-07-27 00:04:01.672779 | orchestrator | TASK [Update package cache] ****************************************************
2025-07-27 00:04:01.701044 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:04:01.701079 | orchestrator |
2025-07-27 00:04:01.701086 | orchestrator | TASK [Install required packages] ***********************************************
2025-07-27 00:04:01.729213 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:04:01.729250 | orchestrator |
2025-07-27 00:04:01.729255 | orchestrator | TASK [Remove some python packages] *********************************************
2025-07-27 00:04:01.765340 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:04:01.765397 | orchestrator |
2025-07-27 00:04:01.765402 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-07-27 00:04:01.797208 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:04:01.797252 | orchestrator |
2025-07-27 00:04:01.797260 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ******************************
2025-07-27 00:04:01.835406 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:04:01.835447 | orchestrator |
2025-07-27 00:04:01.835453 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2025-07-27 00:04:01.862368 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:04:01.862398 | orchestrator |
2025-07-27 00:04:01.862404 | orchestrator | TASK [Set APT options on manager] **********************************************
2025-07-27 00:04:02.585490 | orchestrator | changed: [testbed-manager]
2025-07-27 00:04:02.585537 | orchestrator |
2025-07-27 00:04:02.585545 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2025-07-27 00:06:27.832434 | orchestrator | changed: [testbed-manager]
2025-07-27 00:06:27.832552 | orchestrator |
2025-07-27 00:06:27.832572 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-07-27 00:08:02.944181 | orchestrator | changed: [testbed-manager]
2025-07-27 00:08:02.944294 | orchestrator |
2025-07-27 00:08:02.944318 | orchestrator | TASK [Install required packages] ***********************************************
2025-07-27 00:08:28.329638 | orchestrator | changed: [testbed-manager]
2025-07-27 00:08:28.329723 | orchestrator |
2025-07-27 00:08:28.329738 | orchestrator | TASK [Remove some python packages] *********************************************
2025-07-27 00:08:39.227981 | orchestrator | changed: [testbed-manager]
2025-07-27 00:08:39.228077 | orchestrator |
2025-07-27 00:08:39.228093 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-07-27 00:08:39.274812 | orchestrator | ok: [testbed-manager]
2025-07-27 00:08:39.274888 | orchestrator |
2025-07-27 00:08:39.274903 | orchestrator | TASK [Get current user] ********************************************************
2025-07-27 00:08:40.102994 | orchestrator | ok: [testbed-manager]
2025-07-27 00:08:40.103081 | orchestrator |
2025-07-27 00:08:40.103099 | orchestrator | TASK [Create venv directory] ***************************************************
2025-07-27 00:08:40.852613 | orchestrator | changed: [testbed-manager]
2025-07-27 00:08:40.852713 | orchestrator |
2025-07-27 00:08:40.852741 | orchestrator | TASK [Install netaddr in venv] *************************************************
2025-07-27 00:08:47.373326 | orchestrator | changed: [testbed-manager]
2025-07-27 00:08:47.373417 | orchestrator |
2025-07-27 00:08:47.373459 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2025-07-27 00:08:53.641146 | orchestrator | changed: [testbed-manager]
2025-07-27 00:08:53.641309 | orchestrator |
2025-07-27 00:08:53.641328 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2025-07-27 00:08:56.413736 | orchestrator | changed: [testbed-manager]
2025-07-27 00:08:56.413780 | orchestrator |
2025-07-27 00:08:56.413788 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2025-07-27 00:08:58.219623 | orchestrator | changed: [testbed-manager]
2025-07-27 00:08:58.219712 | orchestrator |
2025-07-27 00:08:58.219728 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2025-07-27 00:08:59.361625 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-07-27 00:08:59.361715 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-07-27 00:08:59.361729 | orchestrator |
2025-07-27 00:08:59.361742 | orchestrator | TASK [Sync sources in /opt/src] ************************************************
2025-07-27 00:08:59.404518 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-07-27 00:08:59.404590 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-07-27 00:08:59.404598 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-07-27 00:08:59.404604 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-07-27 00:09:05.595278 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-07-27 00:09:05.595349 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-07-27 00:09:05.595359 | orchestrator |
2025-07-27 00:09:05.595367 | orchestrator | TASK [Create /usr/share/ansible directory] *************************************
2025-07-27 00:09:06.193943 | orchestrator | changed: [testbed-manager]
2025-07-27 00:09:06.194172 | orchestrator |
2025-07-27 00:09:06.194196 | orchestrator | TASK [Install collections from Ansible galaxy] *********************************
2025-07-27 00:12:26.527459 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon)
2025-07-27 00:12:26.527521 | orchestrator | changed: [testbed-manager] => (item=ansible.posix)
2025-07-27 00:12:26.527533 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2)
2025-07-27 00:12:26.527540 | orchestrator |
2025-07-27 00:12:26.527548 | orchestrator | TASK [Install local collections] ***********************************************
2025-07-27 00:12:28.848070 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons)
2025-07-27 00:12:28.848103 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services)
2025-07-27 00:12:28.848108 | orchestrator |
2025-07-27 00:12:28.848113 | orchestrator | PLAY [Create operator user] ****************************************************
2025-07-27 00:12:28.848118 | orchestrator |
2025-07-27 00:12:28.848122 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-27 00:12:30.264097 | orchestrator | ok: [testbed-manager]
2025-07-27 00:12:30.264148 | orchestrator |
2025-07-27 00:12:30.264158 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-07-27 00:12:30.313427 | orchestrator | ok: [testbed-manager]
2025-07-27 00:12:30.313470 | orchestrator |
2025-07-27 00:12:30.313478 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-07-27 00:12:30.383606 | orchestrator | ok: [testbed-manager]
2025-07-27 00:12:30.383669 | orchestrator |
2025-07-27 00:12:30.383678 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-07-27 00:12:31.131814 | orchestrator | changed: [testbed-manager]
2025-07-27 00:12:31.131853 | orchestrator |
2025-07-27 00:12:31.131861 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-07-27 00:12:31.883156 | orchestrator | changed: [testbed-manager]
2025-07-27 00:12:31.883246 | orchestrator |
2025-07-27 00:12:31.883265 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-07-27 00:12:33.246595 | orchestrator | changed: [testbed-manager] => (item=adm)
2025-07-27 00:12:33.246634 | orchestrator | changed: [testbed-manager] => (item=sudo)
2025-07-27 00:12:33.246641 | orchestrator |
2025-07-27 00:12:33.246699 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-07-27 00:12:34.559209 | orchestrator | changed: [testbed-manager]
2025-07-27 00:12:34.559317 | orchestrator |
2025-07-27 00:12:34.559331 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-07-27 00:12:36.288863 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8)
2025-07-27 00:12:36.288907 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8)
2025-07-27 00:12:36.288915 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8)
2025-07-27 00:12:36.288922 | orchestrator |
2025-07-27 00:12:36.288929 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-07-27 00:12:36.336160 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:12:36.336201 | orchestrator |
2025-07-27 00:12:36.336209 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-07-27 00:12:36.911320 | orchestrator | changed: [testbed-manager]
2025-07-27 00:12:36.911417 | orchestrator |
2025-07-27 00:12:36.911443 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-07-27 00:12:36.985327 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:12:36.985381 | orchestrator |
2025-07-27 00:12:36.985387 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-07-27 00:12:37.823634 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-27 00:12:37.823722 | orchestrator | changed: [testbed-manager]
2025-07-27 00:12:37.823728 | orchestrator |
2025-07-27 00:12:37.823733 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-07-27 00:12:37.853598 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:12:37.853640 | orchestrator |
2025-07-27 00:12:37.853668 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-07-27 00:12:37.876756 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:12:37.876790 | orchestrator |
2025-07-27 00:12:37.876795 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-07-27 00:12:37.914886 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:12:37.914926 | orchestrator |
2025-07-27 00:12:37.914933 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-07-27 00:12:37.968946 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:12:37.968987 | orchestrator |
2025-07-27 00:12:37.968998 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-07-27 00:12:38.710338 | orchestrator | ok: [testbed-manager]
2025-07-27 00:12:38.710374 | orchestrator |
2025-07-27 00:12:38.710380 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-07-27 00:12:38.710385 | orchestrator |
2025-07-27 00:12:38.710389 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-27 00:12:40.125222 | orchestrator | ok: [testbed-manager]
2025-07-27 00:12:40.125255 | orchestrator |
2025-07-27 00:12:40.125261 | orchestrator | TASK [Recursively change ownership of /opt/venv] *******************************
2025-07-27 00:12:41.118362 | orchestrator | changed: [testbed-manager]
2025-07-27 00:12:41.118396 | orchestrator |
2025-07-27 00:12:41.118402 | orchestrator | PLAY RECAP *********************************************************************
2025-07-27 00:12:41.118407 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
2025-07-27 00:12:41.118412 | orchestrator |
2025-07-27 00:12:41.440958 | orchestrator | ok: Runtime: 0:08:45.289484
2025-07-27 00:12:41.458005 |
2025-07-27 00:12:41.458143 | TASK [Point out that the log in on the manager is now possible]
2025-07-27 00:12:41.496405 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'.
2025-07-27 00:12:41.505912 |
2025-07-27 00:12:41.506027 | TASK [Point out that the following task takes some time and does not give any output]
2025-07-27 00:12:41.539707 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-07-27 00:12:41.548086 |
2025-07-27 00:12:41.548219 | TASK [Run manager part 1 + 2]
2025-07-27 00:12:42.455753 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-07-27 00:12:42.516093 | orchestrator |
2025-07-27 00:12:42.516142 | orchestrator | PLAY [Run manager part 1] ******************************************************
2025-07-27 00:12:42.516150 | orchestrator |
2025-07-27 00:12:42.516163 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-27 00:12:45.553401 | orchestrator | ok: [testbed-manager]
2025-07-27 00:12:45.553455 | orchestrator |
2025-07-27 00:12:45.553473 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-07-27 00:12:45.590043 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:12:45.590087 | orchestrator |
2025-07-27 00:12:45.590096 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-07-27 00:12:45.622198 | orchestrator | ok: [testbed-manager]
2025-07-27 00:12:45.622246 | orchestrator |
2025-07-27 00:12:45.622253 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-07-27 00:12:45.675116 | orchestrator | ok: [testbed-manager]
2025-07-27 00:12:45.675176 | orchestrator |
2025-07-27 00:12:45.675185 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-07-27 00:12:45.739922 | orchestrator | ok: [testbed-manager]
2025-07-27 00:12:45.739979 | orchestrator |
2025-07-27 00:12:45.739990 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-07-27 00:12:45.811240 | orchestrator | ok: [testbed-manager]
2025-07-27 00:12:45.811294 | orchestrator |
2025-07-27 00:12:45.811304 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-07-27 00:12:45.859211 | orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager
2025-07-27 00:12:45.859262 | orchestrator |
2025-07-27 00:12:45.859269 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-07-27 00:12:46.580053 | orchestrator | ok: [testbed-manager]
2025-07-27 00:12:46.580114 | orchestrator |
2025-07-27 00:12:46.580126 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-07-27 00:12:46.629499 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:12:46.629550 | orchestrator |
2025-07-27 00:12:46.629559 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-07-27 00:12:48.041093 | orchestrator | changed: [testbed-manager]
2025-07-27 00:12:48.041160 | orchestrator |
2025-07-27 00:12:48.041173 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-07-27 00:12:48.624527 | orchestrator | ok: [testbed-manager]
2025-07-27 00:12:48.624574 | orchestrator |
2025-07-27 00:12:48.624579 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-07-27 00:12:49.801034 | orchestrator | changed: [testbed-manager]
2025-07-27 00:12:49.801098 | orchestrator |
2025-07-27 00:12:49.801116 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-07-27 00:13:04.779473 | orchestrator | changed: [testbed-manager]
2025-07-27 00:13:04.779755 | orchestrator |
2025-07-27 00:13:04.779795 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-07-27 00:13:05.448989 | orchestrator | ok: [testbed-manager]
2025-07-27 00:13:05.449081 | orchestrator |
2025-07-27 00:13:05.449099 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-07-27 00:13:05.503830 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:13:05.503888 | orchestrator |
2025-07-27 00:13:05.503896 | orchestrator | TASK [Copy SSH public key] *****************************************************
2025-07-27 00:13:06.481963 | orchestrator | changed: [testbed-manager]
2025-07-27 00:13:06.482077 | orchestrator |
2025-07-27 00:13:06.482095 | orchestrator | TASK [Copy SSH private key] ****************************************************
2025-07-27 00:13:07.465031 | orchestrator | changed: [testbed-manager]
2025-07-27 00:13:07.465123 | orchestrator |
2025-07-27 00:13:07.465138 | orchestrator | TASK [Create configuration directory] ******************************************
2025-07-27 00:13:08.046681 | orchestrator | changed: [testbed-manager]
2025-07-27 00:13:08.046767 | orchestrator |
2025-07-27 00:13:08.046780 | orchestrator | TASK [Copy testbed repo] *******************************************************
2025-07-27 00:13:08.093280 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-07-27 00:13:08.093382 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-07-27 00:13:08.093397 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-07-27 00:13:08.093409 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-07-27 00:13:11.068990 | orchestrator | changed: [testbed-manager]
2025-07-27 00:13:11.069064 | orchestrator |
2025-07-27 00:13:11.069073 | orchestrator | TASK [Install python requirements in venv] *************************************
2025-07-27 00:13:20.098395 | orchestrator | ok: [testbed-manager] => (item=Jinja2)
2025-07-27 00:13:20.098501 | orchestrator | ok: [testbed-manager] => (item=PyYAML)
2025-07-27 00:13:20.098519 | orchestrator | ok: [testbed-manager] => (item=packaging)
2025-07-27 00:13:20.098532 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3)
2025-07-27 00:13:20.098552 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2)
2025-07-27 00:13:20.098563 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0)
2025-07-27 00:13:20.098575 | orchestrator |
2025-07-27 00:13:20.098587 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] *********************
2025-07-27 00:13:21.283589 | orchestrator | changed: [testbed-manager]
2025-07-27 00:13:21.283631 | orchestrator |
2025-07-27 00:13:21.283639 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] ****************************
2025-07-27 00:13:21.327256 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:13:21.327296 | orchestrator |
2025-07-27 00:13:21.327304 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] *****************************
2025-07-27 00:13:24.505819 | orchestrator | changed: [testbed-manager]
2025-07-27 00:13:24.505917 | orchestrator |
2025-07-27 00:13:24.505933 | orchestrator | TASK [Run update-ca-trust on RedHat] *******************************************
2025-07-27 00:13:24.549687 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:13:24.549779 | orchestrator |
2025-07-27 00:13:24.549795 | orchestrator | TASK [Run manager part 2] ******************************************************
2025-07-27 00:15:02.745158 | orchestrator | changed: [testbed-manager]
2025-07-27 00:15:02.745199 | orchestrator |
2025-07-27 00:15:02.745206 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-07-27 00:15:03.913642 | orchestrator | ok: [testbed-manager]
2025-07-27 00:15:03.913758 | orchestrator |
2025-07-27 00:15:03.913776 | orchestrator | PLAY RECAP *********************************************************************
2025-07-27 00:15:03.913789 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-07-27 00:15:03.913801 | orchestrator |
2025-07-27 00:15:04.176984 | orchestrator | ok: Runtime: 0:02:22.170260
2025-07-27 00:15:04.195092 |
2025-07-27 00:15:04.195230 | TASK [Reboot manager]
2025-07-27 00:15:05.731940 | orchestrator | ok: Runtime: 0:00:01.020933
2025-07-27 00:15:05.750995 |
2025-07-27 00:15:05.751169 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-07-27 00:15:22.162185 | orchestrator | ok
2025-07-27 00:15:22.173799 |
2025-07-27 00:15:22.173932 | TASK [Wait a little longer for the manager so that everything is ready]
2025-07-27 00:16:22.215523 | orchestrator | ok
2025-07-27 00:16:22.225487 |
2025-07-27 00:16:22.225609 | TASK [Deploy manager + bootstrap nodes]
2025-07-27 00:16:24.928743 | orchestrator |
2025-07-27 00:16:24.928957 | orchestrator | # DEPLOY MANAGER
2025-07-27 00:16:24.928984 | orchestrator |
2025-07-27 00:16:24.928999 | orchestrator | + set -e
2025-07-27 00:16:24.929013 | orchestrator | + echo
2025-07-27 00:16:24.929027 | orchestrator | + echo '# DEPLOY MANAGER'
2025-07-27 00:16:24.929044 | orchestrator | + echo
2025-07-27 00:16:24.929094 | orchestrator | + cat /opt/manager-vars.sh
2025-07-27 00:16:24.933354 | orchestrator | export NUMBER_OF_NODES=6
2025-07-27 00:16:24.933397 | orchestrator |
2025-07-27 00:16:24.933410 | orchestrator | export CEPH_VERSION=reef
2025-07-27 00:16:24.933423 | orchestrator | export CONFIGURATION_VERSION=main
2025-07-27 00:16:24.933436 | orchestrator | export MANAGER_VERSION=latest
2025-07-27 00:16:24.933460 | orchestrator | export OPENSTACK_VERSION=2024.2
2025-07-27 00:16:24.933472 | orchestrator |
2025-07-27 00:16:24.933490 | orchestrator | export ARA=false
2025-07-27 00:16:24.933502 | orchestrator | export DEPLOY_MODE=manager
2025-07-27 00:16:24.933519 | orchestrator | export TEMPEST=true
2025-07-27 00:16:24.933531 | orchestrator | export IS_ZUUL=true
2025-07-27 00:16:24.933542 | orchestrator |
2025-07-27 00:16:24.933560 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.221
2025-07-27 00:16:24.933572 | orchestrator | export EXTERNAL_API=false
2025-07-27 00:16:24.933583 | orchestrator |
2025-07-27 00:16:24.933594 | orchestrator | export IMAGE_USER=ubuntu
2025-07-27 00:16:24.933608 | orchestrator | export IMAGE_NODE_USER=ubuntu
2025-07-27 00:16:24.933619 | orchestrator |
2025-07-27 00:16:24.933630 | orchestrator | export CEPH_STACK=ceph-ansible
2025-07-27 00:16:24.933649 | orchestrator |
2025-07-27 00:16:24.933660 | orchestrator | + echo
2025-07-27 00:16:24.933673 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-27 00:16:24.934606 | orchestrator | ++ export INTERACTIVE=false
2025-07-27 00:16:24.934625 | orchestrator | ++ INTERACTIVE=false
2025-07-27 00:16:24.934639 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-27 00:16:24.934652 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-07-27 00:16:24.934848 | orchestrator | + source /opt/manager-vars.sh
2025-07-27 00:16:24.934865 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-27 00:16:24.934879 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-27 00:16:24.934998 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-27 00:16:24.935013 | orchestrator | ++ CEPH_VERSION=reef
2025-07-27 00:16:24.935025 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-27 00:16:24.935036 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-27 00:16:24.935047 | orchestrator | ++ export MANAGER_VERSION=latest
2025-07-27 00:16:24.935058 | orchestrator | ++ MANAGER_VERSION=latest
2025-07-27 00:16:24.935068 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-27 00:16:24.935104 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-27 00:16:24.935126 | orchestrator | ++ export ARA=false
2025-07-27 00:16:24.935137 | orchestrator | ++ ARA=false
2025-07-27 00:16:24.935148 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-27 00:16:24.935159 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-27 00:16:24.935170 | orchestrator | ++ export TEMPEST=true
2025-07-27 00:16:24.935181 | orchestrator | ++ TEMPEST=true
2025-07-27 00:16:24.935192 | orchestrator | ++ export IS_ZUUL=true
2025-07-27 00:16:24.935203 | orchestrator | ++ IS_ZUUL=true
2025-07-27 00:16:24.935218 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.221
2025-07-27 00:16:24.935229 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.221
2025-07-27 00:16:24.935240 | orchestrator | ++ export EXTERNAL_API=false
2025-07-27 00:16:24.935251 | orchestrator | ++ EXTERNAL_API=false
2025-07-27 00:16:24.935262 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-27 00:16:24.935273 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-27 00:16:24.935284 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-27 00:16:24.935295 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-27 00:16:24.935306 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-27 00:16:24.935317 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-27 00:16:24.935328 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-07-27 00:16:24.996346 | orchestrator | + docker version
2025-07-27 00:16:25.282911 | orchestrator | Client: Docker Engine - Community
2025-07-27 00:16:25.283011 | orchestrator | Version: 27.5.1
2025-07-27 00:16:25.283026 | orchestrator | API version: 1.47
2025-07-27 00:16:25.283040 | orchestrator | Go version: go1.22.11
2025-07-27 00:16:25.283052 | orchestrator | Git commit: 9f9e405
2025-07-27 00:16:25.283063 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-07-27 00:16:25.283075 | orchestrator | OS/Arch: linux/amd64
2025-07-27 00:16:25.283086 | orchestrator | Context: default
2025-07-27 00:16:25.283097 | orchestrator |
2025-07-27 00:16:25.283108 | orchestrator | Server: Docker Engine - Community
2025-07-27 00:16:25.283120 | orchestrator | Engine:
2025-07-27 00:16:25.283143 | orchestrator | Version: 27.5.1
2025-07-27 00:16:25.283155 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-07-27 00:16:25.283195 | orchestrator | Go version: go1.22.11
2025-07-27 00:16:25.283208 | orchestrator | Git commit: 4c9b3b0
2025-07-27 00:16:25.283218 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-07-27 00:16:25.283229 | orchestrator | OS/Arch: linux/amd64
2025-07-27 00:16:25.283240 | orchestrator | Experimental: false
2025-07-27 00:16:25.283251 | orchestrator | containerd:
2025-07-27 00:16:25.283266 | orchestrator | Version: 1.7.27
2025-07-27 00:16:25.283278 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-07-27 00:16:25.283289 | orchestrator | runc:
2025-07-27 00:16:25.283442 | orchestrator | Version: 1.2.5
2025-07-27 00:16:25.283458 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-07-27 00:16:25.283470 | orchestrator | docker-init:
2025-07-27 00:16:25.283928 | orchestrator | Version: 0.19.0
2025-07-27 00:16:25.283974 | orchestrator | GitCommit: de40ad0
2025-07-27 00:16:25.287166 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-07-27 00:16:25.296991 | orchestrator | + set -e
2025-07-27 00:16:25.298160 | orchestrator | + source /opt/manager-vars.sh
2025-07-27 00:16:25.298184 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-27 00:16:25.298200 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-27 00:16:25.298212 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-27 00:16:25.298223 | orchestrator | ++ CEPH_VERSION=reef
2025-07-27 00:16:25.298234 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-27 00:16:25.298246 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-27 00:16:25.298257 | orchestrator | ++ export MANAGER_VERSION=latest
2025-07-27 00:16:25.298268 | orchestrator | ++ MANAGER_VERSION=latest
2025-07-27 00:16:25.298279 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-27 00:16:25.298289 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-27 00:16:25.298300 | orchestrator | ++ export ARA=false
2025-07-27 00:16:25.298311 | orchestrator | ++ ARA=false
2025-07-27 00:16:25.298322 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-27 00:16:25.298333 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-27 00:16:25.298344 | orchestrator | ++ export TEMPEST=true
2025-07-27 00:16:25.298355 | orchestrator | ++ TEMPEST=true
2025-07-27 00:16:25.298366 | orchestrator | ++ export IS_ZUUL=true
2025-07-27 00:16:25.298376 | orchestrator | ++ IS_ZUUL=true
2025-07-27 00:16:25.298387 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.221
2025-07-27 00:16:25.298398 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.221
2025-07-27 00:16:25.298409 | orchestrator | ++ export EXTERNAL_API=false
2025-07-27 00:16:25.298419 | orchestrator | ++ EXTERNAL_API=false
2025-07-27 00:16:25.298430 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-27 00:16:25.298440 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-27 00:16:25.298451 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-27 00:16:25.298461 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-27 00:16:25.298472 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-27 00:16:25.298483 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-27 00:16:25.298494 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-27 00:16:25.298504 | orchestrator | ++ export INTERACTIVE=false
2025-07-27 00:16:25.298515 | orchestrator | ++ INTERACTIVE=false
2025-07-27 00:16:25.298525 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-27 00:16:25.298540 | orchestrator | ++
OSISM_APPLY_RETRY=1 2025-07-27 00:16:25.298551 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-07-27 00:16:25.298562 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-07-27 00:16:25.298573 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-07-27 00:16:25.301974 | orchestrator | + set -e 2025-07-27 00:16:25.302000 | orchestrator | + VERSION=reef 2025-07-27 00:16:25.302844 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-07-27 00:16:25.309086 | orchestrator | + [[ -n ceph_version: reef ]] 2025-07-27 00:16:25.309132 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-07-27 00:16:25.314483 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-07-27 00:16:25.319856 | orchestrator | + set -e 2025-07-27 00:16:25.319917 | orchestrator | + VERSION=2024.2 2025-07-27 00:16:25.320636 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-07-27 00:16:25.324469 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-07-27 00:16:25.324502 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-07-27 00:16:25.331675 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-07-27 00:16:25.332157 | orchestrator | ++ semver latest 7.0.0 2025-07-27 00:16:25.396389 | orchestrator | + [[ -1 -ge 0 ]] 2025-07-27 00:16:25.396482 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-07-27 00:16:25.396496 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-07-27 00:16:25.396509 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-07-27 00:16:25.479104 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-07-27 00:16:25.480087 | orchestrator | + source /opt/venv/bin/activate 2025-07-27 00:16:25.481010 | orchestrator | ++ 
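The `set-ceph-version.sh` / `set-openstack-version.sh` traces above both follow the same pattern: grep for a `key: value` line in the configuration YAML and, if present, rewrite it in place with `sed -i`. A minimal sketch of that pattern (the `set_version` helper name and the scratch file are illustrative, not from the actual scripts):

```shell
#!/usr/bin/env bash
# Sketch of the version-pinning pattern traced above:
# replace an existing "<key>: <value>" line in a YAML file via grep + sed.

set_version() {
    local key="$1" version="$2" file="$3"
    # Only rewrite when the key already exists in the file,
    # mirroring the [[ -n ... ]] guard seen in the trace.
    if grep -q "^${key}:" "$file"; then
        sed -i "s/${key}: .*/${key}: ${version}/g" "$file"
    fi
}

# Exercise the helper against a scratch file.
conf=$(mktemp)
printf 'ceph_version: quincy\nopenstack_version: 2024.1\n' > "$conf"
set_version ceph_version reef "$conf"
set_version openstack_version 2024.2 "$conf"
cat "$conf"
# -> ceph_version: reef
# -> openstack_version: 2024.2
```

Note this silently skips keys that are absent; the real scripts guard with `set -e`, so a failed `sed` would abort the deploy.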
deactivate nondestructive
2025-07-27 00:16:25.481036 | orchestrator | ++ '[' -n '' ']'
2025-07-27 00:16:25.481049 | orchestrator | ++ '[' -n '' ']'
2025-07-27 00:16:25.481061 | orchestrator | ++ hash -r
2025-07-27 00:16:25.481072 | orchestrator | ++ '[' -n '' ']'
2025-07-27 00:16:25.481083 | orchestrator | ++ unset VIRTUAL_ENV
2025-07-27 00:16:25.481094 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-07-27 00:16:25.481111 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-07-27 00:16:25.481124 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-07-27 00:16:25.481137 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-07-27 00:16:25.481148 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-07-27 00:16:25.481160 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-07-27 00:16:25.481172 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-27 00:16:25.481315 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-27 00:16:25.481330 | orchestrator | ++ export PATH
2025-07-27 00:16:25.481342 | orchestrator | ++ '[' -n '' ']'
2025-07-27 00:16:25.481353 | orchestrator | ++ '[' -z '' ']'
2025-07-27 00:16:25.481363 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-07-27 00:16:25.481374 | orchestrator | ++ PS1='(venv) '
2025-07-27 00:16:25.481384 | orchestrator | ++ export PS1
2025-07-27 00:16:25.481395 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-07-27 00:16:25.481406 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-07-27 00:16:25.481417 | orchestrator | ++ hash -r
2025-07-27 00:16:25.481452 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-07-27 00:16:26.814696 | orchestrator |
2025-07-27 00:16:26.814842 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-07-27 00:16:26.814861 | orchestrator |
2025-07-27 00:16:26.814873 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-07-27 00:16:27.395869 | orchestrator | ok: [testbed-manager]
2025-07-27 00:16:27.395977 | orchestrator |
2025-07-27 00:16:27.395993 | orchestrator | TASK [Copy fact files] *********************************************************
2025-07-27 00:16:28.412438 | orchestrator | changed: [testbed-manager]
2025-07-27 00:16:28.412573 | orchestrator |
2025-07-27 00:16:28.412602 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-07-27 00:16:28.412624 | orchestrator |
2025-07-27 00:16:28.412644 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-27 00:16:30.933156 | orchestrator | ok: [testbed-manager]
2025-07-27 00:16:30.933255 | orchestrator |
2025-07-27 00:16:30.933270 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-07-27 00:16:30.992840 | orchestrator | ok: [testbed-manager]
2025-07-27 00:16:30.992955 | orchestrator |
2025-07-27 00:16:30.992975 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-07-27 00:16:31.474686 | orchestrator | changed: [testbed-manager]
2025-07-27 00:16:31.474856 | orchestrator |
2025-07-27 00:16:31.474874 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-07-27 00:16:31.519666 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:16:31.519786 | orchestrator |
2025-07-27 00:16:31.519801 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-07-27 00:16:31.879798 | orchestrator | changed: [testbed-manager]
2025-07-27 00:16:31.879944 | orchestrator |
2025-07-27 00:16:31.879972 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-07-27 00:16:31.931088 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:16:31.931193 | orchestrator |
2025-07-27 00:16:31.931207 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-07-27 00:16:32.291500 | orchestrator | ok: [testbed-manager]
2025-07-27 00:16:32.291602 | orchestrator |
2025-07-27 00:16:32.291619 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-07-27 00:16:32.429924 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:16:32.430083 | orchestrator |
2025-07-27 00:16:32.430103 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-07-27 00:16:32.430116 | orchestrator |
2025-07-27 00:16:32.430130 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-27 00:16:34.263979 | orchestrator | ok: [testbed-manager]
2025-07-27 00:16:34.264086 | orchestrator |
2025-07-27 00:16:34.264104 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-07-27 00:16:34.381519 | orchestrator | included: osism.services.traefik for testbed-manager
2025-07-27 00:16:34.381627 | orchestrator |
2025-07-27 00:16:34.381652 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-07-27 00:16:34.436577 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-07-27 00:16:34.436673 | orchestrator |
2025-07-27 00:16:34.436687 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-07-27 00:16:35.530830 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-07-27 00:16:35.530958 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-07-27 00:16:35.530983 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-07-27 00:16:35.531005 | orchestrator |
2025-07-27 00:16:35.531026 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-07-27 00:16:37.190538 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-07-27 00:16:37.190669 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-07-27 00:16:37.190698 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-07-27 00:16:37.190770 | orchestrator |
2025-07-27 00:16:37.190792 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-07-27 00:16:37.799397 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-27 00:16:37.799502 | orchestrator | changed: [testbed-manager]
2025-07-27 00:16:37.799518 | orchestrator |
2025-07-27 00:16:37.799531 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-07-27 00:16:38.406463 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-27 00:16:38.406568 | orchestrator | changed: [testbed-manager]
2025-07-27 00:16:38.406585 | orchestrator |
2025-07-27 00:16:38.406597 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-07-27 00:16:38.467549 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:16:38.467676 | orchestrator |
2025-07-27 00:16:38.467691 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-07-27 00:16:38.817247 | orchestrator | ok: [testbed-manager]
2025-07-27 00:16:38.817353 | orchestrator |
2025-07-27 00:16:38.817370 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-07-27 00:16:38.888054 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-07-27 00:16:38.888155 | orchestrator |
2025-07-27 00:16:38.888170 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-07-27 00:16:39.976010 | orchestrator | changed: [testbed-manager]
2025-07-27 00:16:39.976151 | orchestrator |
2025-07-27 00:16:39.976170 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-07-27 00:16:40.797909 | orchestrator | changed: [testbed-manager]
2025-07-27 00:16:40.798109 | orchestrator |
2025-07-27 00:16:40.798244 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-07-27 00:16:52.520209 | orchestrator | changed: [testbed-manager]
2025-07-27 00:16:52.520334 | orchestrator |
2025-07-27 00:16:52.520352 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-07-27 00:16:52.576832 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:16:52.576938 | orchestrator |
2025-07-27 00:16:52.576957 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-07-27 00:16:52.576970 | orchestrator |
2025-07-27 00:16:52.576982 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-27 00:16:54.468833 | orchestrator | ok: [testbed-manager]
2025-07-27 00:16:54.468947 | orchestrator |
2025-07-27 00:16:54.468995 | orchestrator | TASK [Apply manager role] ******************************************************
2025-07-27 00:16:54.589663 | orchestrator | included: osism.services.manager for testbed-manager
2025-07-27 00:16:54.589815 | orchestrator |
2025-07-27 00:16:54.589833 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-07-27 00:16:54.660694 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-07-27 00:16:54.660834 | orchestrator |
2025-07-27 00:16:54.660855 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-07-27 00:16:57.279666 | orchestrator | ok: [testbed-manager]
2025-07-27 00:16:57.279809 | orchestrator |
2025-07-27 00:16:57.279825 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-07-27 00:16:57.338093 | orchestrator | ok: [testbed-manager]
2025-07-27 00:16:57.338187 | orchestrator |
2025-07-27 00:16:57.338204 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-07-27 00:16:57.465254 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-07-27 00:16:57.465353 | orchestrator |
2025-07-27 00:16:57.465366 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-07-27 00:17:00.425020 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-07-27 00:17:00.425113 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-07-27 00:17:00.425123 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-07-27 00:17:00.425132 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-07-27 00:17:00.425140 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-07-27 00:17:00.425148 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-07-27 00:17:00.425155 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-07-27 00:17:00.425162 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-07-27 00:17:00.425169 | orchestrator |
2025-07-27 00:17:00.425177 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-07-27 00:17:01.096255 | orchestrator | changed: [testbed-manager]
2025-07-27 00:17:01.096365 | orchestrator |
2025-07-27 00:17:01.096382 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-07-27 00:17:01.742239 | orchestrator | changed: [testbed-manager]
2025-07-27 00:17:01.742343 | orchestrator |
2025-07-27 00:17:01.742360 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-07-27 00:17:01.822535 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-07-27 00:17:01.822642 | orchestrator |
2025-07-27 00:17:01.822658 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-07-27 00:17:03.080506 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-07-27 00:17:03.080587 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-07-27 00:17:03.080594 | orchestrator |
2025-07-27 00:17:03.080602 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-07-27 00:17:03.703442 | orchestrator | changed: [testbed-manager]
2025-07-27 00:17:03.703545 | orchestrator |
2025-07-27 00:17:03.703561 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-07-27 00:17:03.759512 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:17:03.759609 | orchestrator |
2025-07-27 00:17:03.759624 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-07-27 00:17:03.814250 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-07-27 00:17:03.814332 | orchestrator |
2025-07-27 00:17:03.814346 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-07-27 00:17:05.237798 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-27 00:17:05.237922 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-27 00:17:05.237939 | orchestrator | changed: [testbed-manager]
2025-07-27 00:17:05.237953 | orchestrator |
2025-07-27 00:17:05.237978 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-07-27 00:17:05.889214 | orchestrator | changed: [testbed-manager]
2025-07-27 00:17:05.889334 | orchestrator |
2025-07-27 00:17:05.889361 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-07-27 00:17:05.948537 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:17:05.948632 | orchestrator |
2025-07-27 00:17:05.948647 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-07-27 00:17:06.059692 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-07-27 00:17:06.059841 | orchestrator |
2025-07-27 00:17:06.059858 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-07-27 00:17:06.607061 | orchestrator | changed: [testbed-manager]
2025-07-27 00:17:06.607166 | orchestrator |
2025-07-27 00:17:06.607183 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-07-27 00:17:08.025930 | orchestrator | changed: [testbed-manager]
2025-07-27 00:17:08.026165 | orchestrator |
2025-07-27 00:17:08.026186 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-07-27 00:17:09.317343 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-07-27 00:17:09.317453 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-07-27 00:17:09.317472 | orchestrator |
2025-07-27 00:17:09.317488 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-07-27 00:17:09.959621 | orchestrator | changed: [testbed-manager]
2025-07-27 00:17:09.959774 | orchestrator |
2025-07-27 00:17:09.959795 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-07-27 00:17:10.391313 | orchestrator | ok: [testbed-manager]
2025-07-27 00:17:10.391416 | orchestrator |
2025-07-27 00:17:10.391431 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-07-27 00:17:10.762467 | orchestrator | changed: [testbed-manager]
2025-07-27 00:17:10.762571 | orchestrator |
2025-07-27 00:17:10.762588 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-07-27 00:17:10.815888 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:17:10.815975 | orchestrator |
2025-07-27 00:17:10.815988 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-07-27 00:17:10.902211 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-07-27 00:17:10.902311 | orchestrator |
2025-07-27 00:17:10.902334 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-07-27 00:17:10.953992 | orchestrator | ok: [testbed-manager]
2025-07-27 00:17:10.954139 | orchestrator |
2025-07-27 00:17:10.954155 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-07-27 00:17:13.079363 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-07-27 00:17:13.079467 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-07-27 00:17:13.079482 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-07-27 00:17:13.079494 | orchestrator |
2025-07-27 00:17:13.079507 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-07-27 00:17:13.808808 | orchestrator | changed: [testbed-manager]
2025-07-27 00:17:13.808925 | orchestrator |
2025-07-27 00:17:13.808943 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-07-27 00:17:14.530571 | orchestrator | changed: [testbed-manager]
2025-07-27 00:17:14.530676 | orchestrator |
2025-07-27 00:17:14.530692 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-07-27 00:17:15.265978 | orchestrator | changed: [testbed-manager]
2025-07-27 00:17:15.266134 | orchestrator |
2025-07-27 00:17:15.266152 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-07-27 00:17:15.353575 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-07-27 00:17:15.353649 | orchestrator |
2025-07-27 00:17:15.353656 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-07-27 00:17:15.398120 | orchestrator | ok: [testbed-manager]
2025-07-27 00:17:15.398211 | orchestrator |
2025-07-27 00:17:15.398221 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-07-27 00:17:16.096208 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-07-27 00:17:16.096329 | orchestrator |
2025-07-27 00:17:16.096351 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-07-27 00:17:16.190294 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-07-27 00:17:16.190399 | orchestrator |
2025-07-27 00:17:16.190415 | orchestrator | TASK
[osism.services.manager : Copy manager systemd unit file] *****************
2025-07-27 00:17:16.936030 | orchestrator | changed: [testbed-manager]
2025-07-27 00:17:16.936112 | orchestrator |
2025-07-27 00:17:16.936120 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-07-27 00:17:17.590306 | orchestrator | ok: [testbed-manager]
2025-07-27 00:17:17.590431 | orchestrator |
2025-07-27 00:17:17.590450 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-07-27 00:17:17.651141 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:17:17.651246 | orchestrator |
2025-07-27 00:17:17.651263 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-07-27 00:17:17.711085 | orchestrator | ok: [testbed-manager]
2025-07-27 00:17:17.711203 | orchestrator |
2025-07-27 00:17:17.711223 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-07-27 00:17:18.575559 | orchestrator | changed: [testbed-manager]
2025-07-27 00:17:18.575685 | orchestrator |
2025-07-27 00:17:18.575704 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-07-27 00:18:25.457566 | orchestrator | changed: [testbed-manager]
2025-07-27 00:18:25.457691 | orchestrator |
2025-07-27 00:18:25.457711 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-07-27 00:18:26.467031 | orchestrator | ok: [testbed-manager]
2025-07-27 00:18:26.467156 | orchestrator |
2025-07-27 00:18:26.467174 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-07-27 00:18:26.525156 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:18:26.525259 | orchestrator |
2025-07-27 00:18:26.525275 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-07-27 00:18:29.213379 | orchestrator | changed: [testbed-manager]
2025-07-27 00:18:29.213466 | orchestrator |
2025-07-27 00:18:29.213478 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-07-27 00:18:29.261088 | orchestrator | ok: [testbed-manager]
2025-07-27 00:18:29.261183 | orchestrator |
2025-07-27 00:18:29.261198 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-07-27 00:18:29.261211 | orchestrator |
2025-07-27 00:18:29.261222 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-07-27 00:18:29.330089 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:18:29.330183 | orchestrator |
2025-07-27 00:18:29.330199 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-07-27 00:19:29.387127 | orchestrator | Pausing for 60 seconds
2025-07-27 00:19:29.387271 | orchestrator | changed: [testbed-manager]
2025-07-27 00:19:29.387290 | orchestrator |
2025-07-27 00:19:29.387303 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-07-27 00:19:33.981316 | orchestrator | changed: [testbed-manager]
2025-07-27 00:19:33.981431 | orchestrator |
2025-07-27 00:19:33.981448 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-07-27 00:20:15.794821 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-07-27 00:20:15.794947 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-07-27 00:20:15.794965 | orchestrator | changed: [testbed-manager]
2025-07-27 00:20:15.794980 | orchestrator |
2025-07-27 00:20:15.794992 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-07-27 00:20:25.715180 | orchestrator | changed: [testbed-manager]
2025-07-27 00:20:25.715296 | orchestrator |
2025-07-27 00:20:25.715313 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-07-27 00:20:25.795575 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-07-27 00:20:25.795721 | orchestrator |
2025-07-27 00:20:25.795743 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-07-27 00:20:25.795760 | orchestrator |
2025-07-27 00:20:25.795809 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-07-27 00:20:25.843986 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:20:25.844090 | orchestrator |
2025-07-27 00:20:25.844105 | orchestrator | PLAY RECAP *********************************************************************
2025-07-27 00:20:25.844118 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-07-27 00:20:25.844129 | orchestrator |
2025-07-27 00:20:25.939448 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-07-27 00:20:25.939531 | orchestrator | + deactivate
2025-07-27 00:20:25.939546 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-07-27 00:20:25.939559 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-27 00:20:25.939570 | orchestrator | + export PATH
2025-07-27 00:20:25.939582 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-07-27 00:20:25.939594 | orchestrator | + '[' -n '' ']'
2025-07-27 00:20:25.939605 | orchestrator | + hash -r
2025-07-27 00:20:25.939616 | orchestrator | + '[' -n '' ']'
2025-07-27 00:20:25.939626 | orchestrator | + unset VIRTUAL_ENV
2025-07-27 00:20:25.939637 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-07-27 00:20:25.939670 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-07-27 00:20:25.939682 | orchestrator | + unset -f deactivate
2025-07-27 00:20:25.939693 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-07-27 00:20:25.946915 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-07-27 00:20:25.946968 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-07-27 00:20:25.946980 | orchestrator | + local max_attempts=60
2025-07-27 00:20:25.946992 | orchestrator | + local name=ceph-ansible
2025-07-27 00:20:25.947003 | orchestrator | + local attempt_num=1
2025-07-27 00:20:25.948009 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-27 00:20:25.978956 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-27 00:20:25.979038 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-07-27 00:20:25.979054 | orchestrator | + local max_attempts=60
2025-07-27 00:20:25.979067 | orchestrator | + local name=kolla-ansible
2025-07-27 00:20:25.979078 | orchestrator | + local attempt_num=1
2025-07-27 00:20:25.979650 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-07-27 00:20:26.018661 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-27 00:20:26.018744 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-07-27 00:20:26.018758 | orchestrator | + local max_attempts=60
2025-07-27 00:20:26.018795 | orchestrator | + local name=osism-ansible
2025-07-27 00:20:26.018817 | orchestrator | + local attempt_num=1
2025-07-27 00:20:26.019865 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-07-27 00:20:26.058503 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-27 00:20:26.058598 | orchestrator | + [[ true == \t\r\u\e ]]
2025-07-27 00:20:26.058614 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-07-27 00:20:26.779911 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-07-27 00:20:26.972402 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-07-27 00:20:26.972511 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-07-27 00:20:26.972527 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-07-27 00:20:26.972540 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-07-27 00:20:26.972553 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-07-27 00:20:26.972600 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-07-27 00:20:26.972613 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-07-27 00:20:26.972624 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 53 seconds (healthy)
2025-07-27 00:20:26.972635 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2025-07-27
00:20:26.972647 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-07-27 00:20:26.972657 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-07-27 00:20:26.972668 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-07-27 00:20:26.972679 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-07-27 00:20:26.972690 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-07-27 00:20:26.972701 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-07-27 00:20:26.981048 | orchestrator | ++ semver latest 7.0.0 2025-07-27 00:20:27.036001 | orchestrator | + [[ -1 -ge 0 ]] 2025-07-27 00:20:27.036089 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-07-27 00:20:27.036100 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-07-27 00:20:27.041091 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-07-27 00:20:39.149585 | orchestrator | 2025-07-27 00:20:39 | INFO  | Task e3e5675e-be2b-400a-866b-ecc488716916 (resolvconf) was prepared for execution. 2025-07-27 00:20:39.149660 | orchestrator | 2025-07-27 00:20:39 | INFO  | It takes a moment until task e3e5675e-be2b-400a-866b-ecc488716916 (resolvconf) has been started and output is visible here. 
2025-07-27 00:20:57.510413 | orchestrator |
2025-07-27 00:20:57.510563 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-07-27 00:20:57.510602 | orchestrator |
2025-07-27 00:20:57.510626 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-27 00:20:57.510647 | orchestrator | Sunday 27 July 2025 00:20:44 +0000 (0:00:00.104) 0:00:00.104 ***********
2025-07-27 00:20:57.510668 | orchestrator | ok: [testbed-manager]
2025-07-27 00:20:57.510688 | orchestrator |
2025-07-27 00:20:57.510709 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-07-27 00:20:57.510731 | orchestrator | Sunday 27 July 2025 00:20:48 +0000 (0:00:03.704) 0:00:03.809 ***********
2025-07-27 00:20:57.510753 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:20:57.510820 | orchestrator |
2025-07-27 00:20:57.510841 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-07-27 00:20:57.510853 | orchestrator | Sunday 27 July 2025 00:20:48 +0000 (0:00:00.070) 0:00:03.880 ***********
2025-07-27 00:20:57.510890 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-07-27 00:20:57.510904 | orchestrator |
2025-07-27 00:20:57.510917 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-07-27 00:20:57.510929 | orchestrator | Sunday 27 July 2025 00:20:48 +0000 (0:00:00.105) 0:00:03.985 ***********
2025-07-27 00:20:57.510942 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-07-27 00:20:57.510954 | orchestrator |
2025-07-27 00:20:57.510967 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-07-27 00:20:57.510979 | orchestrator | Sunday 27 July 2025 00:20:48 +0000 (0:00:00.092) 0:00:04.078 ***********
2025-07-27 00:20:57.510992 | orchestrator | ok: [testbed-manager]
2025-07-27 00:20:57.511004 | orchestrator |
2025-07-27 00:20:57.511016 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-07-27 00:20:57.511028 | orchestrator | Sunday 27 July 2025 00:20:50 +0000 (0:00:01.498) 0:00:05.577 ***********
2025-07-27 00:20:57.511040 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:20:57.511052 | orchestrator |
2025-07-27 00:20:57.511064 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-07-27 00:20:57.511076 | orchestrator | Sunday 27 July 2025 00:20:50 +0000 (0:00:00.057) 0:00:05.634 ***********
2025-07-27 00:20:57.511088 | orchestrator | ok: [testbed-manager]
2025-07-27 00:20:57.511100 | orchestrator |
2025-07-27 00:20:57.511112 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-07-27 00:20:57.511124 | orchestrator | Sunday 27 July 2025 00:20:51 +0000 (0:00:00.727) 0:00:06.362 ***********
2025-07-27 00:20:57.511136 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:20:57.511148 | orchestrator |
2025-07-27 00:20:57.511161 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-07-27 00:20:57.511174 | orchestrator | Sunday 27 July 2025 00:20:51 +0000 (0:00:00.090) 0:00:06.453 ***********
2025-07-27 00:20:57.511185 | orchestrator | changed: [testbed-manager]
2025-07-27 00:20:57.511198 | orchestrator |
2025-07-27 00:20:57.511210 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-07-27 00:20:57.511223 | orchestrator | Sunday 27 July 2025 00:20:52 +0000 (0:00:00.982) 0:00:07.436 ***********
2025-07-27 00:20:57.511236 | orchestrator | changed: [testbed-manager]
2025-07-27 00:20:57.511249 | orchestrator |
2025-07-27 00:20:57.511262 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-07-27 00:20:57.511273 | orchestrator | Sunday 27 July 2025 00:20:53 +0000 (0:00:01.663) 0:00:09.099 ***********
2025-07-27 00:20:57.511284 | orchestrator | ok: [testbed-manager]
2025-07-27 00:20:57.511294 | orchestrator |
2025-07-27 00:20:57.511323 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-07-27 00:20:57.511335 | orchestrator | Sunday 27 July 2025 00:20:55 +0000 (0:00:01.477) 0:00:10.576 ***********
2025-07-27 00:20:57.511346 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-07-27 00:20:57.511357 | orchestrator |
2025-07-27 00:20:57.511378 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-07-27 00:20:57.511389 | orchestrator | Sunday 27 July 2025 00:20:55 +0000 (0:00:00.090) 0:00:10.667 ***********
2025-07-27 00:20:57.511399 | orchestrator | changed: [testbed-manager]
2025-07-27 00:20:57.511410 | orchestrator |
2025-07-27 00:20:57.511421 | orchestrator | PLAY RECAP *********************************************************************
2025-07-27 00:20:57.511432 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-27 00:20:57.511443 | orchestrator |
2025-07-27 00:20:57.511454 | orchestrator |
2025-07-27 00:20:57.511465 | orchestrator | TASKS RECAP ********************************************************************
2025-07-27 00:20:57.511483 | orchestrator | Sunday 27 July 2025 00:20:56 +0000 (0:00:01.602) 0:00:12.270 ***********
2025-07-27 00:20:57.511494 | orchestrator | ===============================================================================
2025-07-27 00:20:57.511505 | orchestrator | Gathering Facts --------------------------------------------------------- 3.70s
2025-07-27 00:20:57.511515 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.66s
2025-07-27 00:20:57.511526 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.60s
2025-07-27 00:20:57.511537 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.50s
2025-07-27 00:20:57.511547 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.48s
2025-07-27 00:20:57.511558 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.98s
2025-07-27 00:20:57.511589 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.73s
2025-07-27 00:20:57.511601 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.11s
2025-07-27 00:20:57.511611 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s
2025-07-27 00:20:57.511622 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s
2025-07-27 00:20:57.511633 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s
2025-07-27 00:20:57.511644 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2025-07-27 00:20:57.511655 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2025-07-27 00:20:57.795202 | orchestrator | + osism apply sshconfig
2025-07-27 00:21:09.700179 | orchestrator | 2025-07-27 00:21:09 | INFO  | Task 3c22b75a-32cb-4e18-b0dc-79f5d9aaa658 (sshconfig) was prepared for execution.
2025-07-27 00:21:09.700292 | orchestrator | 2025-07-27 00:21:09 | INFO  | It takes a moment until task 3c22b75a-32cb-4e18-b0dc-79f5d9aaa658 (sshconfig) has been started and output is visible here.
2025-07-27 00:21:25.555963 | orchestrator |
2025-07-27 00:21:25.556075 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-07-27 00:21:25.556092 | orchestrator |
2025-07-27 00:21:25.556103 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-07-27 00:21:25.556114 | orchestrator | Sunday 27 July 2025 00:21:15 +0000 (0:00:00.108) 0:00:00.108 ***********
2025-07-27 00:21:25.556125 | orchestrator | ok: [testbed-manager]
2025-07-27 00:21:25.556137 | orchestrator |
2025-07-27 00:21:25.556148 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-07-27 00:21:25.556158 | orchestrator | Sunday 27 July 2025 00:21:15 +0000 (0:00:00.630) 0:00:00.739 ***********
2025-07-27 00:21:25.556169 | orchestrator | changed: [testbed-manager]
2025-07-27 00:21:25.556181 | orchestrator |
2025-07-27 00:21:25.556191 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-07-27 00:21:25.556202 | orchestrator | Sunday 27 July 2025 00:21:16 +0000 (0:00:00.778) 0:00:01.518 ***********
2025-07-27 00:21:25.556212 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-07-27 00:21:25.556223 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-07-27 00:21:25.556234 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-07-27 00:21:25.556244 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-07-27 00:21:25.556255 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-07-27 00:21:25.556265 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-07-27 00:21:25.556298 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-07-27 00:21:25.556309 | orchestrator |
2025-07-27 00:21:25.556320 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-07-27 00:21:25.556331 | orchestrator | Sunday 27 July 2025 00:21:24 +0000 (0:00:07.584) 0:00:09.102 ***********
2025-07-27 00:21:25.556366 | orchestrator | skipping: [testbed-manager]
2025-07-27 00:21:25.556378 | orchestrator |
2025-07-27 00:21:25.556388 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-07-27 00:21:25.556399 | orchestrator | Sunday 27 July 2025 00:21:24 +0000 (0:00:00.060) 0:00:09.163 ***********
2025-07-27 00:21:25.556409 | orchestrator | changed: [testbed-manager]
2025-07-27 00:21:25.556420 | orchestrator |
2025-07-27 00:21:25.556431 | orchestrator | PLAY RECAP *********************************************************************
2025-07-27 00:21:25.556443 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-27 00:21:25.556454 | orchestrator |
2025-07-27 00:21:25.556465 | orchestrator |
2025-07-27 00:21:25.556476 | orchestrator | TASKS RECAP ********************************************************************
2025-07-27 00:21:25.556486 | orchestrator | Sunday 27 July 2025 00:21:24 +0000 (0:00:00.749) 0:00:09.912 ***********
2025-07-27 00:21:25.556497 | orchestrator | ===============================================================================
2025-07-27 00:21:25.556510 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 7.58s
2025-07-27 00:21:25.556522 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.78s
2025-07-27 00:21:25.556534 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.75s
2025-07-27 00:21:25.556547 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.63s
2025-07-27 00:21:25.556559 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s
2025-07-27 00:21:25.832752 | orchestrator | + osism apply known-hosts
2025-07-27 00:21:37.777752 | orchestrator | 2025-07-27 00:21:37 | INFO  | Task ce63ee34-83bc-4811-a033-ca89146a8eb4 (known-hosts) was prepared for execution.
2025-07-27 00:21:37.777922 | orchestrator | 2025-07-27 00:21:37 | INFO  | It takes a moment until task ce63ee34-83bc-4811-a033-ca89146a8eb4 (known-hosts) has been started and output is visible here.
2025-07-27 00:21:51.275887 | orchestrator | 2025-07-27 00:21:51 | INFO  | Task 40cc86ff-4a37-4d62-9d74-2bcc36f0211f (known-hosts) was prepared for execution.
2025-07-27 00:21:51.276014 | orchestrator | 2025-07-27 00:21:51 | INFO  | It takes a moment until task 40cc86ff-4a37-4d62-9d74-2bcc36f0211f (known-hosts) has been started and output is visible here.
2025-07-27 00:22:04.214926 | orchestrator |
2025-07-27 00:22:04.215062 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-07-27 00:22:04.215093 | orchestrator |
2025-07-27 00:22:04.215115 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-07-27 00:22:04.215136 | orchestrator | Sunday 27 July 2025 00:21:43 +0000 (0:00:00.111) 0:00:00.111 ***********
2025-07-27 00:22:04.215149 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-07-27 00:22:04.215161 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-07-27 00:22:04.215172 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-07-27 00:22:04.215183 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-07-27 00:22:04.215194 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-07-27 00:22:04.215205 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-07-27 00:22:04.215216 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-07-27 00:22:04.215226 | orchestrator |
2025-07-27 00:22:04.215237 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-07-27 00:22:04.215249 | orchestrator | Sunday 27 July 2025 00:21:50 +0000 (0:00:06.913) 0:00:07.024 ***********
2025-07-27 00:22:04.215262 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-07-27 00:22:04.215275 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-07-27 00:22:04.215309 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-07-27 00:22:04.215331 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-07-27 00:22:04.215342 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-07-27 00:22:04.215353 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-07-27 00:22:04.215365 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-07-27 00:22:04.215378 | orchestrator |
2025-07-27 00:22:04.215398 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-27 00:22:04.215416 | orchestrator | Sunday 27 July 2025 00:21:50 +0000 (0:00:00.178) 0:00:07.202 ***********
2025-07-27 00:22:04.215435 | orchestrator | [ERROR]: Task failed: Conditional result was '3' of type 'int', which evaluates to True. Conditionals must have a boolean result.
2025-07-27 00:22:04.215457 | orchestrator |
2025-07-27 00:22:04.215477 | orchestrator | Task failed.
2025-07-27 00:22:04.215499 | orchestrator | Origin: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml:2:3
2025-07-27 00:22:04.215520 | orchestrator |
2025-07-27 00:22:04.215542 | orchestrator | 1 ---
2025-07-27 00:22:04.215563 | orchestrator | 2 - name: Write scanned known_hosts entries
2025-07-27 00:22:04.215583 | orchestrator |  ^ column 3
2025-07-27 00:22:04.215603 | orchestrator |
2025-07-27 00:22:04.215618 | orchestrator | <<< caused by >>>
2025-07-27 00:22:04.215631 | orchestrator |
2025-07-27 00:22:04.215644 | orchestrator | Conditional result was '3' of type 'int', which evaluates to True. Conditionals must have a boolean result.
2025-07-27 00:22:04.215658 | orchestrator | Origin: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml:12:7
2025-07-27 00:22:04.215670 | orchestrator |
2025-07-27 00:22:04.215683 | orchestrator | 10 when:
2025-07-27 00:22:04.215695 | orchestrator | 11 - item['stdout_lines'] is defined
2025-07-27 00:22:04.215708 | orchestrator | 12 - item['stdout_lines'] | length
2025-07-27 00:22:04.215722 | orchestrator |  ^ column 7
2025-07-27 00:22:04.215733 | orchestrator |
2025-07-27 00:22:04.215744 | orchestrator | Broken conditionals can be temporarily allowed with the `ALLOW_BROKEN_CONDITIONALS` configuration option.
2025-07-27 00:22:04.215755 | orchestrator |
2025-07-27 00:22:04.215765 | orchestrator | failed: [testbed-manager] (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMYO6U1joMcK1q63VZOZscZa04C5q1W/rzpfe7Ps7FEd) => changed=false
2025-07-27 00:22:04.215778 | orchestrator |  ansible_loop_var: inner_item
2025-07-27 00:22:04.215820 | orchestrator |  inner_item: testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMYO6U1joMcK1q63VZOZscZa04C5q1W/rzpfe7Ps7FEd
2025-07-27 00:22:04.215833 | orchestrator |  msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.'
2025-07-27 00:22:04.215878 | orchestrator | failed: [testbed-manager] (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCi8j4R2msr2GpUsJo8JX8SeUkSfM/qhzWrLHvWj/7+c3o/zMH07O/CzOmujFskB3kksir8DqTxlMxuDnrj3o1mHG8D2RrqyEiBOPEWyknVyScxBwivn7ojQiipyfP+So0MOqVHDYLKB4uVLq6FQZ6/RwIAdYIfLHNQy8POESB04I/GboHKIeV25SZG9wXjcj3o4jNwGnn6a9TNRJbv9nNVxe2wd/8fONc9aLyaOwX5atWhKDyYuLPSQE+X0jI1kV5bX61w6FGA5Qfs1uxRmBYRqOehx4ZbIGEKJsmhOIGZwX3TGiTT+0Z3EWghku0s9mkG2JXKHuDU5DRK8r9MgkDq+ayOZKL07RKcJw0GX8GInKfc3zedWz80aK2sacAamBkcMziQbYnt9WulhJhpU4VP7alBKMCk3VIGy5uX7zgy36KMoyAP1DkybSRWbveeXq63IxtYI60SOSAxcRABjKZ7slisRR1Ihlckqs7SeWLEHlb48rQ7Fa7FUOrcvwmCg4c=) => changed=false
2025-07-27 00:22:04.215905 | orchestrator |  ansible_loop_var: inner_item
2025-07-27 00:22:04.215918 | orchestrator |  inner_item: testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCi8j4R2msr2GpUsJo8JX8SeUkSfM/qhzWrLHvWj/7+c3o/zMH07O/CzOmujFskB3kksir8DqTxlMxuDnrj3o1mHG8D2RrqyEiBOPEWyknVyScxBwivn7ojQiipyfP+So0MOqVHDYLKB4uVLq6FQZ6/RwIAdYIfLHNQy8POESB04I/GboHKIeV25SZG9wXjcj3o4jNwGnn6a9TNRJbv9nNVxe2wd/8fONc9aLyaOwX5atWhKDyYuLPSQE+X0jI1kV5bX61w6FGA5Qfs1uxRmBYRqOehx4ZbIGEKJsmhOIGZwX3TGiTT+0Z3EWghku0s9mkG2JXKHuDU5DRK8r9MgkDq+ayOZKL07RKcJw0GX8GInKfc3zedWz80aK2sacAamBkcMziQbYnt9WulhJhpU4VP7alBKMCk3VIGy5uX7zgy36KMoyAP1DkybSRWbveeXq63IxtYI60SOSAxcRABjKZ7slisRR1Ihlckqs7SeWLEHlb48rQ7Fa7FUOrcvwmCg4c=
2025-07-27 00:22:04.215930 | orchestrator |  msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.'
2025-07-27 00:22:04.215942 | orchestrator | failed: [testbed-manager] (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNze2l7s6Qf+ia5DLXvGsvR/crjFToS9kqq0eNtO6zpuGbEEWA01e/NuEzHW16bZkHWgEi6As6U22Psi8IoIv00=) => changed=false
2025-07-27 00:22:04.216019 | orchestrator |  ansible_loop_var: inner_item
2025-07-27 00:22:04.216031 | orchestrator |  inner_item: testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNze2l7s6Qf+ia5DLXvGsvR/crjFToS9kqq0eNtO6zpuGbEEWA01e/NuEzHW16bZkHWgEi6As6U22Psi8IoIv00=
2025-07-27 00:22:04.216042 | orchestrator |  msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.'
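The traceback above quotes lines 10-12 of write-scanned.yml: the second `when:` entry, `item['stdout_lines'] | length`, returns an int (here `3`), which newer ansible-core releases reject because conditionals must evaluate to a boolean. A minimal sketch of a boolean-valued rewrite of just that conditional (the rest of the task is assumed unchanged; `ALLOW_BROKEN_CONDITIONALS` mentioned in the error is only a temporary workaround):

```yaml
# Sketch only: the conditional from write-scanned.yml:10-12, with the
# bare `| length` (an int) turned into an explicit boolean comparison.
when:
  - item['stdout_lines'] is defined
  - item['stdout_lines'] | length > 0
```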
2025-07-27 00:22:04.216055 | orchestrator |
2025-07-27 00:22:04.216074 | orchestrator | PLAY RECAP *********************************************************************
2025-07-27 00:22:04.216092 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-07-27 00:22:04.216112 | orchestrator |
2025-07-27 00:22:04.216132 | orchestrator |
2025-07-27 00:22:04.216152 | orchestrator | TASKS RECAP ********************************************************************
2025-07-27 00:22:04.216168 | orchestrator | Sunday 27 July 2025 00:21:50 +0000 (0:00:00.100) 0:00:07.303 ***********
2025-07-27 00:22:04.216180 | orchestrator | ===============================================================================
2025-07-27 00:22:04.216190 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.91s
2025-07-27 00:22:04.216201 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s
2025-07-27 00:22:04.216212 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.10s
2025-07-27 00:22:04.216223 | orchestrator |
2025-07-27 00:22:04.216233 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-07-27 00:22:04.216244 | orchestrator |
2025-07-27 00:22:04.216254 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-07-27 00:22:04.216265 | orchestrator | Sunday 27 July 2025 00:21:57 +0000 (0:00:00.149) 0:00:00.149 ***********
2025-07-27 00:22:04.216275 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-07-27 00:22:04.216286 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-07-27 00:22:04.216296 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-07-27 00:22:04.216307 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-07-27 00:22:04.216318 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-07-27 00:22:04.216328 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-07-27 00:22:04.216339 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-07-27 00:22:04.216349 | orchestrator |
2025-07-27 00:22:04.216360 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-07-27 00:22:04.216371 | orchestrator | Sunday 27 July 2025 00:22:04 +0000 (0:00:06.607) 0:00:06.757 ***********
2025-07-27 00:22:04.216395 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-07-27 00:22:04.216414 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-07-27 00:22:04.216433 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-07-27 00:22:04.216464 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-07-27 00:22:04.841423 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-07-27 00:22:04.841528 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-07-27 00:22:04.841542 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-07-27 00:22:04.841555 | orchestrator |
2025-07-27 00:22:04.841567 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-27 00:22:04.841580 | orchestrator | Sunday 27 July 2025 00:22:04 +0000 (0:00:00.193) 0:00:06.950 ***********
2025-07-27 00:22:04.841591 | orchestrator | [ERROR]: Task failed: Conditional result was '3' of type 'int', which evaluates to True. Conditionals must have a boolean result.
2025-07-27 00:22:04.841604 | orchestrator |
2025-07-27 00:22:04.841615 | orchestrator | Task failed.
2025-07-27 00:22:04.841628 | orchestrator | Origin: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml:2:3
2025-07-27 00:22:04.841640 | orchestrator |
2025-07-27 00:22:04.841651 | orchestrator | 1 ---
2025-07-27 00:22:04.841662 | orchestrator | 2 - name: Write scanned known_hosts entries
2025-07-27 00:22:04.841673 | orchestrator |  ^ column 3
2025-07-27 00:22:04.841684 | orchestrator |
2025-07-27 00:22:04.841695 | orchestrator | <<< caused by >>>
2025-07-27 00:22:04.841705 | orchestrator |
2025-07-27 00:22:04.841717 | orchestrator | Conditional result was '3' of type 'int', which evaluates to True. Conditionals must have a boolean result.
2025-07-27 00:22:04.841728 | orchestrator | Origin: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml:12:7
2025-07-27 00:22:04.841739 | orchestrator |
2025-07-27 00:22:04.841750 | orchestrator | 10 when:
2025-07-27 00:22:04.841762 | orchestrator | 11 - item['stdout_lines'] is defined
2025-07-27 00:22:04.841773 | orchestrator | 12 - item['stdout_lines'] | length
2025-07-27 00:22:04.841817 | orchestrator |  ^ column 7
2025-07-27 00:22:04.841831 | orchestrator |
2025-07-27 00:22:04.841862 | orchestrator | Broken conditionals can be temporarily allowed with the `ALLOW_BROKEN_CONDITIONALS` configuration option.
2025-07-27 00:22:04.841874 | orchestrator |
2025-07-27 00:22:04.841885 | orchestrator | failed: [testbed-manager] (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNze2l7s6Qf+ia5DLXvGsvR/crjFToS9kqq0eNtO6zpuGbEEWA01e/NuEzHW16bZkHWgEi6As6U22Psi8IoIv00=) => changed=false
2025-07-27 00:22:04.841898 | orchestrator |  ansible_loop_var: inner_item
2025-07-27 00:22:04.841910 | orchestrator |  inner_item: testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNze2l7s6Qf+ia5DLXvGsvR/crjFToS9kqq0eNtO6zpuGbEEWA01e/NuEzHW16bZkHWgEi6As6U22Psi8IoIv00=
2025-07-27 00:22:04.841944 | orchestrator |  msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.'
2025-07-27 00:22:04.841962 | orchestrator | failed: [testbed-manager] (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCi8j4R2msr2GpUsJo8JX8SeUkSfM/qhzWrLHvWj/7+c3o/zMH07O/CzOmujFskB3kksir8DqTxlMxuDnrj3o1mHG8D2RrqyEiBOPEWyknVyScxBwivn7ojQiipyfP+So0MOqVHDYLKB4uVLq6FQZ6/RwIAdYIfLHNQy8POESB04I/GboHKIeV25SZG9wXjcj3o4jNwGnn6a9TNRJbv9nNVxe2wd/8fONc9aLyaOwX5atWhKDyYuLPSQE+X0jI1kV5bX61w6FGA5Qfs1uxRmBYRqOehx4ZbIGEKJsmhOIGZwX3TGiTT+0Z3EWghku0s9mkG2JXKHuDU5DRK8r9MgkDq+ayOZKL07RKcJw0GX8GInKfc3zedWz80aK2sacAamBkcMziQbYnt9WulhJhpU4VP7alBKMCk3VIGy5uX7zgy36KMoyAP1DkybSRWbveeXq63IxtYI60SOSAxcRABjKZ7slisRR1Ihlckqs7SeWLEHlb48rQ7Fa7FUOrcvwmCg4c=) => changed=false
2025-07-27 00:22:04.841978 | orchestrator |  ansible_loop_var: inner_item
2025-07-27 00:22:04.841992 | orchestrator |  inner_item: testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCi8j4R2msr2GpUsJo8JX8SeUkSfM/qhzWrLHvWj/7+c3o/zMH07O/CzOmujFskB3kksir8DqTxlMxuDnrj3o1mHG8D2RrqyEiBOPEWyknVyScxBwivn7ojQiipyfP+So0MOqVHDYLKB4uVLq6FQZ6/RwIAdYIfLHNQy8POESB04I/GboHKIeV25SZG9wXjcj3o4jNwGnn6a9TNRJbv9nNVxe2wd/8fONc9aLyaOwX5atWhKDyYuLPSQE+X0jI1kV5bX61w6FGA5Qfs1uxRmBYRqOehx4ZbIGEKJsmhOIGZwX3TGiTT+0Z3EWghku0s9mkG2JXKHuDU5DRK8r9MgkDq+ayOZKL07RKcJw0GX8GInKfc3zedWz80aK2sacAamBkcMziQbYnt9WulhJhpU4VP7alBKMCk3VIGy5uX7zgy36KMoyAP1DkybSRWbveeXq63IxtYI60SOSAxcRABjKZ7slisRR1Ihlckqs7SeWLEHlb48rQ7Fa7FUOrcvwmCg4c=
2025-07-27 00:22:04.842005 | orchestrator |  msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.'
2025-07-27 00:22:04.842077 | orchestrator | failed: [testbed-manager] (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMYO6U1joMcK1q63VZOZscZa04C5q1W/rzpfe7Ps7FEd) => changed=false  2025-07-27 00:22:04.842094 | orchestrator |  ansible_loop_var: inner_item 2025-07-27 00:22:04.842126 | orchestrator |  inner_item: testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMYO6U1joMcK1q63VZOZscZa04C5q1W/rzpfe7Ps7FEd 2025-07-27 00:22:04.842138 | orchestrator |  msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.' 2025-07-27 00:22:04.842149 | orchestrator | 2025-07-27 00:22:04.842160 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-27 00:22:04.842171 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-07-27 00:22:04.842181 | orchestrator | 2025-07-27 00:22:04.842192 | orchestrator | 2025-07-27 00:22:04.842203 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-27 00:22:04.842214 | orchestrator | Sunday 27 July 2025 00:22:04 +0000 (0:00:00.104) 0:00:07.054 *********** 2025-07-27 00:22:04.842224 | orchestrator | =============================================================================== 2025-07-27 00:22:04.842235 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.61s 2025-07-27 00:22:04.842246 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.19s 2025-07-27 00:22:04.842257 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.10s 2025-07-27 00:22:05.515121 | orchestrator | ERROR 2025-07-27 00:22:05.515298 | orchestrator | { 2025-07-27 00:22:05.515349 | orchestrator | "delta": "0:05:42.205622", 2025-07-27 00:22:05.515375 | orchestrator | "end": "2025-07-27 00:22:05.124398", 
2025-07-27 00:22:05.515403 | orchestrator | "msg": "non-zero return code", 2025-07-27 00:22:05.515423 | orchestrator | "rc": 2, 2025-07-27 00:22:05.515442 | orchestrator | "start": "2025-07-27 00:16:22.918776" 2025-07-27 00:22:05.515460 | orchestrator | } failure 2025-07-27 00:22:05.522921 | 2025-07-27 00:22:05.522994 | PLAY RECAP 2025-07-27 00:22:05.523043 | orchestrator | ok: 20 changed: 7 unreachable: 0 failed: 1 skipped: 2 rescued: 0 ignored: 0 2025-07-27 00:22:05.523068 | 2025-07-27 00:22:05.702512 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-07-27 00:22:05.704111 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-07-27 00:22:06.780878 | 2025-07-27 00:22:06.781054 | PLAY [Post output play] 2025-07-27 00:22:06.796565 | 2025-07-27 00:22:06.796813 | LOOP [stage-output : Register sources] 2025-07-27 00:22:06.857234 | 2025-07-27 00:22:06.857490 | TASK [stage-output : Check sudo] 2025-07-27 00:22:07.917398 | orchestrator | sudo: a password is required 2025-07-27 00:22:08.394773 | orchestrator | ok: Runtime: 0:00:00.273388 2025-07-27 00:22:08.401489 | 2025-07-27 00:22:08.401577 | LOOP [stage-output : Set source and destination for files and folders] 2025-07-27 00:22:08.432504 | 2025-07-27 00:22:08.432684 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-07-27 00:22:08.501729 | orchestrator | ok 2025-07-27 00:22:08.509284 | 2025-07-27 00:22:08.509386 | LOOP [stage-output : Ensure target folders exist] 2025-07-27 00:22:09.060085 | orchestrator | ok: "docs" 2025-07-27 00:22:09.062883 | 2025-07-27 00:22:09.328162 | orchestrator | ok: "artifacts" 2025-07-27 00:22:09.581236 | orchestrator | ok: "logs" 2025-07-27 00:22:09.593100 | 2025-07-27 00:22:09.593211 | LOOP [stage-output : Copy files and folders to staging folder] 2025-07-27 00:22:09.631966 | 2025-07-27 00:22:09.632131 | TASK [stage-output : Make all log files readable] 2025-07-27 00:22:09.972919 | orchestrator | 
ok 2025-07-27 00:22:09.978307 | 2025-07-27 00:22:09.978418 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-07-27 00:22:10.024625 | orchestrator | skipping: Conditional result was False 2025-07-27 00:22:10.031250 | 2025-07-27 00:22:10.031362 | TASK [stage-output : Discover log files for compression] 2025-07-27 00:22:10.063957 | orchestrator | skipping: Conditional result was False 2025-07-27 00:22:10.069686 | 2025-07-27 00:22:10.069766 | LOOP [stage-output : Archive everything from logs] 2025-07-27 00:22:10.121480 | 2025-07-27 00:22:10.121598 | PLAY [Post cleanup play] 2025-07-27 00:22:10.136024 | 2025-07-27 00:22:10.136114 | TASK [Set cloud fact (Zuul deployment)] 2025-07-27 00:22:10.213535 | orchestrator | ok 2025-07-27 00:22:10.232439 | 2025-07-27 00:22:10.232534 | TASK [Set cloud fact (local deployment)] 2025-07-27 00:22:10.305720 | orchestrator | skipping: Conditional result was False 2025-07-27 00:22:10.312157 | 2025-07-27 00:22:10.312247 | TASK [Clean the cloud environment] 2025-07-27 00:22:11.178280 | orchestrator | 2025-07-27 00:22:11 - clean up servers 2025-07-27 00:22:11.943872 | orchestrator | 2025-07-27 00:22:11 - testbed-manager 2025-07-27 00:22:12.032713 | orchestrator | 2025-07-27 00:22:12 - testbed-node-5 2025-07-27 00:22:12.122257 | orchestrator | 2025-07-27 00:22:12 - testbed-node-3 2025-07-27 00:22:12.212047 | orchestrator | 2025-07-27 00:22:12 - testbed-node-4 2025-07-27 00:22:12.308680 | orchestrator | 2025-07-27 00:22:12 - testbed-node-1 2025-07-27 00:22:12.423097 | orchestrator | 2025-07-27 00:22:12 - testbed-node-2 2025-07-27 00:22:12.519027 | orchestrator | 2025-07-27 00:22:12 - testbed-node-0 2025-07-27 00:22:12.617528 | orchestrator | 2025-07-27 00:22:12 - clean up keypairs 2025-07-27 00:22:12.644954 | orchestrator | 2025-07-27 00:22:12 - testbed 2025-07-27 00:22:12.675139 | orchestrator | 2025-07-27 00:22:12 - wait for servers to be gone 2025-07-27 00:22:23.582391 | orchestrator | 2025-07-27 00:22:23 - clean up 
ports 2025-07-27 00:22:24.247768 | orchestrator | 2025-07-27 00:22:24 - 04e3087e-1ad2-4e03-8abf-4f8318afa926 2025-07-27 00:22:24.448480 | orchestrator | 2025-07-27 00:22:24 - 0ae57d18-17c3-4c5b-8236-a51edf7e745c 2025-07-27 00:22:24.694267 | orchestrator | 2025-07-27 00:22:24 - 1bca0290-746a-49bd-814f-9415c5f2297c 2025-07-27 00:22:24.942072 | orchestrator | 2025-07-27 00:22:24 - 218de33b-048d-412f-a98a-8881d2e6631e 2025-07-27 00:22:25.224341 | orchestrator | 2025-07-27 00:22:25 - 2c4d596d-ca33-4807-8d7e-3cd1168ed877 2025-07-27 00:22:25.487192 | orchestrator | 2025-07-27 00:22:25 - 4bddd341-3907-4051-a9c1-91d8305185b2 2025-07-27 00:22:25.706177 | orchestrator | 2025-07-27 00:22:25 - 8b2c3cf5-db15-481e-ba5f-69af5efec822 2025-07-27 00:22:26.090265 | orchestrator | 2025-07-27 00:22:26 - clean up volumes 2025-07-27 00:22:26.214308 | orchestrator | 2025-07-27 00:22:26 - testbed-volume-1-node-base 2025-07-27 00:22:26.255467 | orchestrator | 2025-07-27 00:22:26 - testbed-volume-3-node-base 2025-07-27 00:22:26.304516 | orchestrator | 2025-07-27 00:22:26 - testbed-volume-5-node-base 2025-07-27 00:22:26.348229 | orchestrator | 2025-07-27 00:22:26 - testbed-volume-0-node-base 2025-07-27 00:22:26.395204 | orchestrator | 2025-07-27 00:22:26 - testbed-volume-4-node-base 2025-07-27 00:22:26.447167 | orchestrator | 2025-07-27 00:22:26 - testbed-volume-2-node-base 2025-07-27 00:22:26.496159 | orchestrator | 2025-07-27 00:22:26 - testbed-volume-7-node-4 2025-07-27 00:22:26.541246 | orchestrator | 2025-07-27 00:22:26 - testbed-volume-5-node-5 2025-07-27 00:22:26.593853 | orchestrator | 2025-07-27 00:22:26 - testbed-volume-1-node-4 2025-07-27 00:22:26.643942 | orchestrator | 2025-07-27 00:22:26 - testbed-volume-manager-base 2025-07-27 00:22:26.687872 | orchestrator | 2025-07-27 00:22:26 - testbed-volume-6-node-3 2025-07-27 00:22:26.734504 | orchestrator | 2025-07-27 00:22:26 - testbed-volume-3-node-3 2025-07-27 00:22:26.784773 | orchestrator | 2025-07-27 00:22:26 - 
testbed-volume-2-node-5 2025-07-27 00:22:26.826904 | orchestrator | 2025-07-27 00:22:26 - testbed-volume-8-node-5 2025-07-27 00:22:26.876085 | orchestrator | 2025-07-27 00:22:26 - testbed-volume-0-node-3 2025-07-27 00:22:26.923027 | orchestrator | 2025-07-27 00:22:26 - testbed-volume-4-node-4 2025-07-27 00:22:26.966212 | orchestrator | 2025-07-27 00:22:26 - disconnect routers 2025-07-27 00:22:27.079245 | orchestrator | 2025-07-27 00:22:27 - testbed 2025-07-27 00:22:28.087219 | orchestrator | 2025-07-27 00:22:28 - clean up subnets 2025-07-27 00:22:28.143150 | orchestrator | 2025-07-27 00:22:28 - subnet-testbed-management 2025-07-27 00:22:28.300662 | orchestrator | 2025-07-27 00:22:28 - clean up networks 2025-07-27 00:22:29.002224 | orchestrator | 2025-07-27 00:22:29 - net-testbed-management 2025-07-27 00:22:29.317051 | orchestrator | 2025-07-27 00:22:29 - clean up security groups 2025-07-27 00:22:29.358935 | orchestrator | 2025-07-27 00:22:29 - testbed-management 2025-07-27 00:22:29.482362 | orchestrator | 2025-07-27 00:22:29 - testbed-node 2025-07-27 00:22:29.624010 | orchestrator | 2025-07-27 00:22:29 - clean up floating ips 2025-07-27 00:22:29.661377 | orchestrator | 2025-07-27 00:22:29 - 81.163.192.221 2025-07-27 00:22:30.018246 | orchestrator | 2025-07-27 00:22:30 - clean up routers 2025-07-27 00:22:30.077924 | orchestrator | 2025-07-27 00:22:30 - testbed 2025-07-27 00:22:31.397467 | orchestrator | ok: Runtime: 0:00:20.271791 2025-07-27 00:22:31.402254 | 2025-07-27 00:22:31.402457 | PLAY RECAP 2025-07-27 00:22:31.402585 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-07-27 00:22:31.402649 | 2025-07-27 00:22:31.572639 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-07-27 00:22:31.573648 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-07-27 00:22:32.394031 | 2025-07-27 00:22:32.394266 | PLAY [Cleanup play] 2025-07-27 
00:22:32.410545 | 2025-07-27 00:22:32.410681 | TASK [Set cloud fact (Zuul deployment)] 2025-07-27 00:22:32.461589 | orchestrator | ok 2025-07-27 00:22:32.468627 | 2025-07-27 00:22:32.468768 | TASK [Set cloud fact (local deployment)] 2025-07-27 00:22:32.503956 | orchestrator | skipping: Conditional result was False 2025-07-27 00:22:32.533492 | 2025-07-27 00:22:32.533688 | TASK [Clean the cloud environment] 2025-07-27 00:22:33.717713 | orchestrator | 2025-07-27 00:22:33 - clean up servers 2025-07-27 00:22:34.190744 | orchestrator | 2025-07-27 00:22:34 - clean up keypairs 2025-07-27 00:22:34.213144 | orchestrator | 2025-07-27 00:22:34 - wait for servers to be gone 2025-07-27 00:22:34.260557 | orchestrator | 2025-07-27 00:22:34 - clean up ports 2025-07-27 00:22:34.343567 | orchestrator | 2025-07-27 00:22:34 - clean up volumes 2025-07-27 00:22:34.406311 | orchestrator | 2025-07-27 00:22:34 - disconnect routers 2025-07-27 00:22:34.440635 | orchestrator | 2025-07-27 00:22:34 - clean up subnets 2025-07-27 00:22:34.465159 | orchestrator | 2025-07-27 00:22:34 - clean up networks 2025-07-27 00:22:34.684702 | orchestrator | 2025-07-27 00:22:34 - clean up security groups 2025-07-27 00:22:34.750429 | orchestrator | 2025-07-27 00:22:34 - clean up floating ips 2025-07-27 00:22:34.778283 | orchestrator | 2025-07-27 00:22:34 - clean up routers 2025-07-27 00:22:35.079968 | orchestrator | ok: Runtime: 0:00:01.472347 2025-07-27 00:22:35.084486 | 2025-07-27 00:22:35.084668 | PLAY RECAP 2025-07-27 00:22:35.084804 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-07-27 00:22:35.084874 | 2025-07-27 00:22:35.220005 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-07-27 00:22:35.222537 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-07-27 00:22:36.013172 | 2025-07-27 00:22:36.013366 | PLAY [Base post-fetch] 2025-07-27 00:22:36.029768 | 
2025-07-27 00:22:36.029950 | TASK [fetch-output : Set log path for multiple nodes] 2025-07-27 00:22:36.106735 | orchestrator | skipping: Conditional result was False 2025-07-27 00:22:36.120723 | 2025-07-27 00:22:36.120956 | TASK [fetch-output : Set log path for single node] 2025-07-27 00:22:36.182885 | orchestrator | ok 2025-07-27 00:22:36.197190 | 2025-07-27 00:22:36.197519 | LOOP [fetch-output : Ensure local output dirs] 2025-07-27 00:22:36.706007 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/a37dd791e3bb41e7ac00ecb3823bd6ca/work/logs" 2025-07-27 00:22:37.010713 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/a37dd791e3bb41e7ac00ecb3823bd6ca/work/artifacts" 2025-07-27 00:22:37.279079 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/a37dd791e3bb41e7ac00ecb3823bd6ca/work/docs" 2025-07-27 00:22:37.310560 | 2025-07-27 00:22:37.310730 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-07-27 00:22:38.238100 | orchestrator | changed: .d..t...... ./ 2025-07-27 00:22:38.238380 | orchestrator | changed: All items complete 2025-07-27 00:22:38.238419 | 2025-07-27 00:22:39.025318 | orchestrator | changed: .d..t...... ./ 2025-07-27 00:22:39.807779 | orchestrator | changed: .d..t...... 
./ 2025-07-27 00:22:39.835661 | 2025-07-27 00:22:39.835803 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-07-27 00:22:39.864112 | orchestrator | skipping: Conditional result was False 2025-07-27 00:22:39.868394 | orchestrator | skipping: Conditional result was False 2025-07-27 00:22:39.881000 | 2025-07-27 00:22:39.881178 | PLAY RECAP 2025-07-27 00:22:39.881248 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-07-27 00:22:39.881280 | 2025-07-27 00:22:40.018912 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-07-27 00:22:40.019923 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-07-27 00:22:40.762747 | 2025-07-27 00:22:40.762918 | PLAY [Base post] 2025-07-27 00:22:40.776544 | 2025-07-27 00:22:40.776666 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-07-27 00:22:42.076627 | orchestrator | changed 2025-07-27 00:22:42.088224 | 2025-07-27 00:22:42.088378 | PLAY RECAP 2025-07-27 00:22:42.088459 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-07-27 00:22:42.088536 | 2025-07-27 00:22:42.191504 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-07-27 00:22:42.192385 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-07-27 00:22:43.016708 | 2025-07-27 00:22:43.016844 | PLAY [Base post-logs] 2025-07-27 00:22:43.026433 | 2025-07-27 00:22:43.026543 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-07-27 00:22:43.415559 | localhost | changed 2025-07-27 00:22:43.424576 | 2025-07-27 00:22:43.424700 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-07-27 00:22:43.449155 | localhost | ok 2025-07-27 00:22:43.452095 | 2025-07-27 00:22:43.452178 | TASK [Set zuul-log-path fact] 2025-07-27 
00:22:43.476910 | localhost | ok 2025-07-27 00:22:43.486993 | 2025-07-27 00:22:43.487107 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-07-27 00:22:43.521648 | localhost | ok 2025-07-27 00:22:43.524613 | 2025-07-27 00:22:43.524703 | TASK [upload-logs : Create log directories] 2025-07-27 00:22:43.996730 | localhost | changed 2025-07-27 00:22:43.999457 | 2025-07-27 00:22:43.999551 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-07-27 00:22:44.461163 | localhost -> localhost | ok: Runtime: 0:00:00.006890 2025-07-27 00:22:44.464679 | 2025-07-27 00:22:44.464778 | TASK [upload-logs : Upload logs to log server] 2025-07-27 00:22:44.954961 | localhost | Output suppressed because no_log was given 2025-07-27 00:22:44.956635 | 2025-07-27 00:22:44.956722 | LOOP [upload-logs : Compress console log and json output] 2025-07-27 00:22:45.001344 | localhost | skipping: Conditional result was False 2025-07-27 00:22:45.006760 | localhost | skipping: Conditional result was False 2025-07-27 00:22:45.012070 | 2025-07-27 00:22:45.012162 | LOOP [upload-logs : Upload compressed console log and json output] 2025-07-27 00:22:45.053749 | localhost | skipping: Conditional result was False 2025-07-27 00:22:45.054007 | 2025-07-27 00:22:45.060070 | localhost | skipping: Conditional result was False 2025-07-27 00:22:45.069498 | 2025-07-27 00:22:45.069665 | LOOP [upload-logs : Upload console log and json output]
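Editor's note on the deploy failure: the error reported above comes from `write-scanned.yml` in the `osism.commons.known_hosts` role, where the `when` clause ends in `item['stdout_lines'] | length`. Newer ansible-core releases reject conditionals whose result is not a boolean, so the integer `3` triggers the "Conditionals must have a boolean result" failure. A minimal sketch of the corrected conditional (only the two `when` lines appear in the log; everything else about the task is unknown here):

```yaml
# Sketch of the fix for write-scanned.yml line 12: compare the length
# explicitly so the conditional evaluates to a boolean instead of an int.
when:
  - item['stdout_lines'] is defined
  - item['stdout_lines'] | length > 0
```

Setting the `ALLOW_BROKEN_CONDITIONALS` option, as the error message suggests, is only a temporary workaround; making the test boolean is the durable fix.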