Result   | FAILURE
Tests    | 0 failed / 0 succeeded
Started  |
Elapsed  | 2h15m
Revision | release-1.5
... skipping 172 lines ...
#18 exporting to image
#18 exporting layers
#18 exporting layers 0.4s done
#18 writing image sha256:4eed95c2b34108eb7bf421aa1777f596601f9f75114b9e08c6d45494315f7a1d done
#18 naming to gcr.io/k8s-staging-cluster-api/capv-manager:e2e done
#18 DONE 0.4s
WARNING: failed to get git remote url: fatal: No remote configured to list refs from.
Activated service account credentials for: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com]
Copying file:///logs/artifacts/tempContainers/image.tar [Content-Type=application/x-tar]...
Operation completed over 1 objects/73.8 MiB.
make -C /home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/hack/tools ginkgo
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/hack/tools'
... skipping 122 lines ...
#18 exporting to image
#18 exporting layers done
#18 writing image sha256:4eed95c2b34108eb7bf421aa1777f596601f9f75114b9e08c6d45494315f7a1d done
#18 naming to gcr.io/k8s-staging-cluster-api/capv-manager:e2e done
#18 DONE 0.0s
WARNING: failed to get git remote url: fatal: No remote configured to list refs from.
make release-manifests
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
make manifests STAGE=release MANIFEST_DIR=out PULL_POLICY=IfNotPresent IMAGE=gcr.io/cluster-api-provider-vsphere/release/manager:v1.6.0
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
make generate-flavors FLAVOR_DIR=out
make[3]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
... skipping 245 lines ...
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by md-scale-dyvjdr/md-scale-q6fapd to be provisioned
STEP: Waiting for one control plane node to exist
STEP: Dumping logs from the "md-scale-q6fapd" workload cluster
Failed to get logs for machine md-scale-q6fapd-drtsb, cluster md-scale-dyvjdr/md-scale-q6fapd: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine md-scale-q6fapd-md-0-c8645864d-hw5hh, cluster md-scale-dyvjdr/md-scale-q6fapd: dialing host IP address at : dial tcp :22: connect: connection refused
STEP: Dumping all the Cluster API resources in the "md-scale-dyvjdr" namespace
STEP: Deleting cluster md-scale-dyvjdr/md-scale-q6fapd
STEP: Deleting cluster md-scale-q6fapd
INFO: Waiting for the Cluster md-scale-dyvjdr/md-scale-q6fapd to be deleted
STEP: Waiting for cluster md-scale-q6fapd to be deleted
STEP: Deleting namespace used for hosting the "md-scale" test spec
... skipping 64 lines ...
INFO: Waiting for rolling upgrade to start.
INFO: Waiting for MachineDeployment rolling upgrade to start
INFO: Waiting for rolling upgrade to complete.
INFO: Waiting for MachineDeployment rolling upgrade to complete
STEP: PASSED!
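A note on the "dial tcp :22" failures above: the host portion of the address is empty, which means the Machine had not reported an IP address when the log collector tried to reach it over SSH. A minimal Go sketch (a hypothetical reproduction, not the framework's collector code) that produces the same error shape:

package main

import (
	"fmt"
	"net"
)

func main() {
	machineIP := "" // empty: the Machine object carried no address
	addr := net.JoinHostPort(machineIP, "22") // yields ":22"

	// Go's net package treats an empty host as the local system, so the
	// dial targets localhost and fails there rather than reporting a
	// missing address.
	_, err := net.Dial("tcp", addr)
	fmt.Println(err) // e.g. "dial tcp :22: connect: connection refused"
}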
[1mSTEP[0m: Dumping logs from the "md-rollout-w5rvxt" workload cluster Failed to get logs for machine md-rollout-w5rvxt-md-0-5ff7854b8-bp8xw, cluster md-rollout-i6fwx4/md-rollout-w5rvxt: running command "cat /var/log/cloud-init-output.log": Process exited with status 1 Failed to get logs for machine md-rollout-w5rvxt-nqkkg, cluster md-rollout-i6fwx4/md-rollout-w5rvxt: running command "cat /var/log/cloud-init-output.log": Process exited with status 1 [1mSTEP[0m: Dumping all the Cluster API resources in the "md-rollout-i6fwx4" namespace [1mSTEP[0m: Deleting cluster md-rollout-i6fwx4/md-rollout-w5rvxt [1mSTEP[0m: Deleting cluster md-rollout-w5rvxt INFO: Waiting for the Cluster md-rollout-i6fwx4/md-rollout-w5rvxt to be deleted [1mSTEP[0m: Waiting for cluster md-rollout-w5rvxt to be deleted [1mSTEP[0m: Deleting namespace used for hosting the "md-rollout" test spec ... skipping 62 lines ... INFO: Waiting for MachineDeployment rollout for MachineDeploymentTopology "md-0" (class "quick-start-worker") to complete. [1mSTEP[0m: Rebasing the Cluster to a ClusterClass with a modified label for MachineDeployments and wait for changes to be applied to the MachineDeployment objects INFO: Waiting for MachineDeployment rollout to complete. INFO: Waiting for MachineDeployment rollout for MachineDeploymentTopology "md-0" (class "quick-start-worker") to complete. [1mSTEP[0m: PASSED! [1mSTEP[0m: Dumping logs from the "clusterclass-changes-oa18fh" workload cluster Failed to get logs for machine clusterclass-changes-oa18fh-4rqrf-g9k7c, cluster clusterclass-changes-i5ljub/clusterclass-changes-oa18fh: running command "cat /var/log/cloud-init-output.log": Process exited with status 1 Failed to get logs for machine clusterclass-changes-oa18fh-md-0-vlncq-7d649dbb5f-rz5l9, cluster clusterclass-changes-i5ljub/clusterclass-changes-oa18fh: dialing host IP address at : dial tcp :22: connect: connection refused Failed to get logs for machine clusterclass-changes-oa18fh-md-0-vlncq-bb6f4d4b4-x9trd, cluster clusterclass-changes-i5ljub/clusterclass-changes-oa18fh: running command "cat /var/log/cloud-init-output.log": Process exited with status 1 [1mSTEP[0m: Dumping all the Cluster API resources in the "clusterclass-changes-i5ljub" namespace [1mSTEP[0m: Deleting cluster clusterclass-changes-i5ljub/clusterclass-changes-oa18fh [1mSTEP[0m: Deleting cluster clusterclass-changes-oa18fh INFO: Waiting for the Cluster clusterclass-changes-i5ljub/clusterclass-changes-oa18fh to be deleted [1mSTEP[0m: Waiting for cluster clusterclass-changes-oa18fh to be deleted [1mSTEP[0m: Deleting namespace used for hosting the "clusterclass-changes" test spec ... skipping 113 lines ... INFO: Waiting for the machine deployments to be provisioned [1mSTEP[0m: Waiting for the workload nodes to exist [1mSTEP[0m: Checking all the machines controlled by quick-start-60k0p1-md-0 are in the "<None>" failure domain INFO: Waiting for the machine pools to be provisioned [1mSTEP[0m: PASSED! 
[1mSTEP[0m: Dumping logs from the "quick-start-60k0p1" workload cluster Failed to get logs for machine quick-start-60k0p1-md-0-855fc5b8d-bnsjt, cluster quick-start-celq4s/quick-start-60k0p1: running command "cat /var/log/cloud-init-output.log": Process exited with status 1 Failed to get logs for machine quick-start-60k0p1-svg4g, cluster quick-start-celq4s/quick-start-60k0p1: running command "cat /var/log/cloud-init-output.log": Process exited with status 1 [1mSTEP[0m: Dumping all the Cluster API resources in the "quick-start-celq4s" namespace [1mSTEP[0m: Deleting cluster quick-start-celq4s/quick-start-60k0p1 [1mSTEP[0m: Deleting cluster quick-start-60k0p1 INFO: Waiting for the Cluster quick-start-celq4s/quick-start-60k0p1 to be deleted [1mSTEP[0m: Waiting for cluster quick-start-60k0p1 to be deleted [1mSTEP[0m: Deleting namespace used for hosting the "quick-start" test spec ... skipping 46 lines ... INFO: Waiting for the first control plane machine managed by node-drain-74avrk/node-drain-y2zo9i to be provisioned [1mSTEP[0m: Waiting for one control plane node to exist INFO: Waiting for control plane to be ready INFO: Waiting for the remaining control plane machines managed by node-drain-74avrk/node-drain-y2zo9i to be provisioned [1mSTEP[0m: Waiting for all control plane nodes to exist [1mSTEP[0m: Dumping logs from the "node-drain-y2zo9i" workload cluster Failed to get logs for machine node-drain-y2zo9i-h6nvb, cluster node-drain-74avrk/node-drain-y2zo9i: running command "cat /var/log/cloud-init-output.log": Process exited with status 1 Failed to get logs for machine node-drain-y2zo9i-md-0-6c4b6b4dfd-f6zvp, cluster node-drain-74avrk/node-drain-y2zo9i: dialing host IP address at : dial tcp :22: connect: connection refused [1mSTEP[0m: Dumping all the Cluster API resources in the "node-drain-74avrk" namespace [1mSTEP[0m: Deleting cluster node-drain-74avrk/node-drain-y2zo9i [1mSTEP[0m: Deleting cluster node-drain-y2zo9i INFO: Waiting for the Cluster node-drain-74avrk/node-drain-y2zo9i to be deleted [1mSTEP[0m: Waiting for cluster node-drain-y2zo9i to be deleted [1mSTEP[0m: Deleting namespace used for hosting the "node-drain" test spec ... skipping 55 lines ... INFO: Waiting for control plane quick-start-3tr7p3/quick-start-p2mpfw-zc5lp to be ready (implies underlying nodes to be ready as well) [1mSTEP[0m: Waiting for the control plane to be ready [1mSTEP[0m: Checking all the the control plane machines are in the expected failure domains INFO: Waiting for the machine deployments to be provisioned [1mSTEP[0m: Waiting for the workload nodes to exist [1mSTEP[0m: Dumping logs from the "quick-start-p2mpfw" workload cluster Failed to get logs for machine quick-start-p2mpfw-md-0-wf2mz-95d6cc4c-bbbfq, cluster quick-start-3tr7p3/quick-start-p2mpfw: running command "cat /var/log/cloud-init-output.log": Process exited with status 1 Failed to get logs for machine quick-start-p2mpfw-zc5lp-qjk5v, cluster quick-start-3tr7p3/quick-start-p2mpfw: running command "cat /var/log/cloud-init-output.log": Process exited with status 1 [1mSTEP[0m: Dumping all the Cluster API resources in the "quick-start-3tr7p3" namespace [1mSTEP[0m: Deleting cluster quick-start-3tr7p3/quick-start-p2mpfw [1mSTEP[0m: Deleting cluster quick-start-p2mpfw INFO: Waiting for the Cluster quick-start-3tr7p3/quick-start-p2mpfw to be deleted [1mSTEP[0m: Waiting for cluster quick-start-p2mpfw to be deleted [1mSTEP[0m: Deleting namespace used for hosting the "quick-start" test spec ... skipping 124 lines ... 
INFO: Waiting for control plane mhc-remediation-swksto/mhc-remediation-fx1um1 to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
STEP: Checking all the control plane machines are in the expected failure domains
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Dumping logs from the "mhc-remediation-fx1um1" workload cluster
Failed to get logs for machine mhc-remediation-fx1um1-md-0-85867bc9f7-8ps9m, cluster mhc-remediation-swksto/mhc-remediation-fx1um1: dialing host IP address at 192.168.6.63: dial tcp 192.168.6.63:22: connect: no route to host
Failed to get logs for machine mhc-remediation-fx1um1-mxcbz, cluster mhc-remediation-swksto/mhc-remediation-fx1um1: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "mhc-remediation-swksto" namespace
STEP: Deleting cluster mhc-remediation-swksto/mhc-remediation-fx1um1
STEP: Deleting cluster mhc-remediation-fx1um1
INFO: Waiting for the Cluster mhc-remediation-swksto/mhc-remediation-fx1um1 to be deleted
STEP: Waiting for cluster mhc-remediation-fx1um1 to be deleted
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
... skipping 52 lines ...
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by mhc-remediation-10sfa0/mhc-remediation-hcw4t3 to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for the remaining control plane machines managed by mhc-remediation-10sfa0/mhc-remediation-hcw4t3 to be provisioned
STEP: Waiting for all control plane nodes to exist
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2023-01-27T20:15:11Z"}
++ early_exit_handler
++ '[' -n 161 ']'
++ kill -TERM 161
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
Cleaning up after docker
Releasing IP claims
++ docker ps -aq
++ xargs -r docker rm -f
5e54110f47db
949f999d1499
++ service docker stop
Stopping Docker: docker
INFO: Failed to list the machines: Get "https://127.0.0.1:44843/apis/cluster.x-k8s.io/v1beta1/namespaces/mhc-remediation-10sfa0/machines?labelSelector=cluster.x-k8s.io%2Fcluster-name%3Dmhc-remediation-hcw4t3%2Ccluster.x-k8s.io%2Fcontrol-plane%3D": dial tcp 127.0.0.1:44843: connect: connection refused
INFO: Failed to list the machines: Get "https://127.0.0.1:44843/apis/cluster.x-k8s.io/v1beta1/namespaces/mhc-remediation-10sfa0/machines?labelSelector=cluster.x-k8s.io%2Fcluster-name%3Dmhc-remediation-hcw4t3%2Ccluster.x-k8s.io%2Fcontrol-plane%3D": dial tcp 127.0.0.1:44843: connect: connection refused
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
++ true
+ EXIT_VALUE=130
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: docker
INFO: Failed to list the machines: Get "https://127.0.0.1:44843/apis/cluster.x-k8s.io/v1beta1/namespaces/mhc-remediation-10sfa0/machines?labelSelector=cluster.x-k8s.io%2Fcluster-name%3Dmhc-remediation-hcw4t3%2Ccluster.x-k8s.io%2Fcontrol-plane%3D": dial tcp 127.0.0.1:44843: connect: connection refused
INFO: Failed to list the machines: Get "https://127.0.0.1:44843/apis/cluster.x-k8s.io/v1beta1/namespaces/mhc-remediation-10sfa0/machines?labelSelector=cluster.x-k8s.io%2Fcluster-name%3Dmhc-remediation-hcw4t3%2Ccluster.x-k8s.io%2Fcontrol-plane%3D": dial tcp 127.0.0.1:44843: connect: connection refused
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
INFO: Failed to list the machines: Get "https://127.0.0.1:44843/apis/cluster.x-k8s.io/v1beta1/namespaces/mhc-remediation-10sfa0/machines?labelSelector=cluster.x-k8s.io%2Fcluster-name%3Dmhc-remediation-hcw4t3%2Ccluster.x-k8s.io%2Fcontrol-plane%3D": dial tcp 127.0.0.1:44843: connect: connection refused
... the "Failed to list the machines" line above repeats several more times as the poller retries ...
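The repeated "Failed to list the machines" lines are the test framework polling the management cluster's API server on 127.0.0.1:44843, which the Docker-in-Docker cleanup above has already torn down, hence "connection refused" on every retry. URL-decoded, the labelSelector is cluster.x-k8s.io/cluster-name=mhc-remediation-hcw4t3,cluster.x-k8s.io/control-plane= (a present-but-empty label marks control plane machines). An equivalent controller-runtime query, as a sketch rather than the framework's own code, with the kubeconfig resolution assumed:

package main

import (
	"context"
	"fmt"
	"log"

	"k8s.io/apimachinery/pkg/runtime"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func main() {
	scheme := runtime.NewScheme()
	if err := clusterv1.AddToScheme(scheme); err != nil {
		log.Fatal(err)
	}

	cfg, err := ctrl.GetConfig() // kubeconfig pointing at the management cluster
	if err != nil {
		log.Fatal(err)
	}
	c, err := client.New(cfg, client.Options{Scheme: scheme})
	if err != nil {
		log.Fatal(err)
	}

	// The decoded labelSelector from the URLs above.
	machines := &clusterv1.MachineList{}
	err = c.List(context.TODO(), machines,
		client.InNamespace("mhc-remediation-10sfa0"),
		client.MatchingLabels{
			"cluster.x-k8s.io/cluster-name":  "mhc-remediation-hcw4t3",
			"cluster.x-k8s.io/control-plane": "",
		})
	if err != nil {
		// With the API server gone, this is the "dial tcp 127.0.0.1:44843:
		// connect: connection refused" error seen throughout the log.
		log.Fatal(err)
	}
	fmt.Println(len(machines.Items), "control plane machines")
}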
[1mSTEP[0m: Dumping logs from the "mhc-remediation-hcw4t3" workload cluster Unable to connect to the server: dial tcp 192.168.6.161:6443: i/o timeout [91m[1m• Failure [861.741 seconds][0m When testing unhealthy machines remediation [90m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/mhc_remediation_test.go:24[0m [91m[1mShould successfully trigger KCP remediation [It][0m [90m/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.2/e2e/mhc_remediations.go:116[0m [91mTimed out after 600.000s. Error: Unexpected non-nil/non-zero argument at index 1: <*url.Error>: &url.Error{Op:"Get", URL:"https://127.0.0.1:44843/apis/cluster.x-k8s.io/v1beta1/namespaces/mhc-remediation-10sfa0/machines?labelSelector=cluster.x-k8s.io%2Fcluster-name%3Dmhc-remediation-hcw4t3%2Ccluster.x-k8s.io%2Fcontrol-plane%3D", Err:(*net.OpError)(0xc0029b6af0)}[0m /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.2/framework/controlplane_helpers.go:116 [90m------------------------------[0m [0mLabel nodes with ESXi host info[0m [1mcreates a workload cluster whose nodes have the ESXi host info[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/node_labeling_test.go:52[0m ... skipping 2 lines ... [91m[1m• Failure in Spec Setup (BeforeEach) [60.004 seconds][0m Label nodes with ESXi host info [90m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/node_labeling_test.go:38[0m [91m[1mcreates a workload cluster whose nodes have the ESXi host info [BeforeEach][0m [90m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/node_labeling_test.go:52[0m [91mFailed to get controller-runtime client Unexpected error: <*url.Error | 0xc00151c6c0>: { Op: "Get", URL: "https://127.0.0.1:44843/api?timeout=32s", Err: <*net.OpError | 0xc0017ef9f0>{ Op: "dial", Net: "tcp", Source: nil, ... skipping 25 lines ... [90m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/dhcp_overrides_test.go:53[0m when Creating a cluster with DHCPOverrides configured [90m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/dhcp_overrides_test.go:54[0m [91m[1mOnly configures the network with the provided nameservers [BeforeEach][0m [90m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/dhcp_overrides_test.go:66[0m [91mFailed to get controller-runtime client Unexpected error: <*url.Error | 0xc00296ad50>: { Op: "Get", URL: "https://127.0.0.1:44843/api?timeout=32s", Err: <*net.OpError | 0xc0023fe640>{ Op: "dial", Net: "tcp", Source: nil, ... skipping 21 lines ... [91m[1m• Failure in Spec Setup (BeforeEach) [60.004 seconds][0m Cluster creation with storage policy [90m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/storage_policy_test.go:45[0m [91m[1mshould create a cluster successfully [BeforeEach][0m [90m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/storage_policy_test.go:57[0m [91mFailed to get controller-runtime client Unexpected error: <*url.Error | 0xc000f82750>: { Op: "Get", URL: "https://127.0.0.1:44843/api?timeout=32s", Err: <*net.OpError | 0xc002a794a0>{ Op: "dial", Net: "tcp", Source: nil, ... skipping 21 lines ... 
• Failure in Spec Setup (BeforeEach) [60.004 seconds]
Cluster creation with [Ignition] bootstrap [PR-Blocking]
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/ignition_test.go:25
  Should create a workload cluster [BeforeEach]
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.2/e2e/quick_start.go:78

  Failed to get controller-runtime client
  Unexpected error:
      <*url.Error | 0xc000f3d0e0>: {
          Op: "Get",
          URL: "https://127.0.0.1:44843/api?timeout=32s",
          Err: <*net.OpError | 0xc00294a6e0>{
              Op: "dial",
              Net: "tcp",
              Source: nil,
... skipping 11 lines ...
  Get "https://127.0.0.1:44843/api?timeout=32s": dial tcp 127.0.0.1:44843: connect: connection refused
  occurred
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.2/framework/cluster_proxy.go:188
------------------------------
STEP: Cleaning up the vSphere session
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:254","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2023-01-27T20:30:11Z"}
{"component":"entrypoint","error":"os: process already finished","file":"k8s.io/test-infra/prow/entrypoint/run.go:256","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2023-01-27T20:30:11Z"}
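Reading the two entrypoint records together: Prow's entrypoint sent SIGTERM when the 2h0m0s timeout fired (the run.go:164 line further up), waited out a 15m grace period, and by the time it tried the follow-up kill the process had already exited, hence "os: process already finished". A hypothetical Go re-creation of that timeout-then-grace-period flow (not Prow's actual entrypoint code):

package main

import (
	"log"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("./runner.sh") // placeholder for the wrapped test process
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	select {
	case err := <-done:
		log.Println("process finished:", err)
	case <-time.After(2 * time.Hour):
		log.Println("Process did not finish before 2h0m0s timeout")
		_ = cmd.Process.Signal(syscall.SIGTERM) // triggers handlers like early_exit_handler above
		select {
		case <-done:
			// exited during the grace period
		case <-time.After(15 * time.Minute):
			log.Println("Process did not exit before 15m0s grace period")
			// If the process is already reaped by now, Kill reports
			// "os: process already finished", as in the final log line.
			if err := cmd.Process.Kill(); err != nil {
				log.Println("Could not kill process after grace period:", err)
			}
		}
	}
}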