Error lines from build-log.txt
... skipping 165 lines ...
#18 exporting to image
#18 exporting layers
#18 exporting layers 0.4s done
#18 writing image sha256:a7788487d66aa654b4dea16a73d4c06770357c39a4f9298038e648272b59121b done
#18 naming to gcr.io/k8s-staging-cluster-api/capv-manager:e2e done
#18 DONE 0.4s
WARNING: failed to get git remote url: fatal: No remote configured to list refs from.
Activated service account credentials for: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com]
Copying file:///logs/artifacts/tempContainers/image.tar [Content-Type=application/x-tar]...
/ [0 files][  0.0 B/ 74.6 MiB]
- [1 files][ 74.6 MiB/ 74.6 MiB]
Operation completed over 1 objects/74.6 MiB.
make -C /home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/hack/tools ginkgo
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/hack/tools'
... skipping 126 lines ...
#18 exporting to image
#18 exporting layers done
#18 writing image sha256:a7788487d66aa654b4dea16a73d4c06770357c39a4f9298038e648272b59121b done
#18 naming to gcr.io/k8s-staging-cluster-api/capv-manager:e2e done
#18 DONE 0.0s
WARNING: failed to get git remote url: fatal: No remote configured to list refs from.
make release-manifests
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
make manifests STAGE=release MANIFEST_DIR=out PULL_POLICY=IfNotPresent IMAGE=gcr.io/cluster-api-provider-vsphere/release/manager:v1.6.0
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
make generate-flavors FLAVOR_DIR=out
make[3]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
... skipping 261 lines ...
Patching MachineHealthCheck unhealthy condition to one of the nodes
INFO: Patching the node condition to the node
Waiting for remediation
Waiting until the node with unhealthy node condition is remediated
STEP: PASSED! @ 01/25/23 17:34:46.247
STEP: Dumping logs from the "mhc-remediation-cdfkdq" workload cluster @ 01/25/23 17:34:46.247
Failed to get logs for Machine mhc-remediation-cdfkdq-cbb54, Cluster mhc-remediation-agizhe/mhc-remediation-cdfkdq: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine mhc-remediation-cdfkdq-md-0-77dd6fff-d44t8, Cluster mhc-remediation-agizhe/mhc-remediation-cdfkdq: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "mhc-remediation-agizhe" namespace @ 01/25/23 17:34:50.841
STEP: Deleting cluster mhc-remediation-agizhe/mhc-remediation-cdfkdq @ 01/25/23 17:34:51.136
STEP: Deleting cluster mhc-remediation-cdfkdq @ 01/25/23 17:34:51.161
INFO: Waiting for the Cluster mhc-remediation-agizhe/mhc-remediation-cdfkdq to be deleted
STEP: Waiting for cluster mhc-remediation-cdfkdq to be deleted @ 01/25/23 17:34:51.176
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec @ 01/25/23 17:35:21.196
... skipping 54 lines ...
Patching MachineHealthCheck unhealthy condition to one of the nodes
INFO: Patching the node condition to the node
Waiting for remediation
Waiting until the node with unhealthy node condition is remediated
STEP: PASSED! @ 01/25/23 17:45:29.746
STEP: Dumping logs from the "mhc-remediation-7bwofn" workload cluster @ 01/25/23 17:45:29.746
Failed to get logs for Machine mhc-remediation-7bwofn-jrptn, Cluster mhc-remediation-becc2f/mhc-remediation-7bwofn: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine mhc-remediation-7bwofn-md-0-84cb8db5fd-xzbk5, Cluster mhc-remediation-becc2f/mhc-remediation-7bwofn: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine mhc-remediation-7bwofn-qfk97, Cluster mhc-remediation-becc2f/mhc-remediation-7bwofn: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine mhc-remediation-7bwofn-qst5b, Cluster mhc-remediation-becc2f/mhc-remediation-7bwofn: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "mhc-remediation-becc2f" namespace @ 01/25/23 17:45:37.484
STEP: Deleting cluster mhc-remediation-becc2f/mhc-remediation-7bwofn @ 01/25/23 17:45:37.842
STEP: Deleting cluster mhc-remediation-7bwofn @ 01/25/23 17:45:37.861
INFO: Waiting for the Cluster mhc-remediation-becc2f/mhc-remediation-7bwofn to be deleted
STEP: Waiting for cluster mhc-remediation-7bwofn to be deleted @ 01/25/23 17:45:37.878
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec @ 01/25/23 17:46:17.902
... skipping 108 lines ...
INFO: Waiting for correct number of replicas to exist
STEP: Scaling the MachineDeployment down to 1 @ 01/25/23 17:55:58.442
INFO: Scaling machine deployment md-scale-i9tjpt/md-scale-3dqzy4-md-0 from 3 to 1 replicas
INFO: Waiting for correct number of replicas to exist
STEP: PASSED! @ 01/25/23 17:56:08.578
STEP: Dumping logs from the "md-scale-3dqzy4" workload cluster @ 01/25/23 17:56:08.578
Failed to get logs for Machine md-scale-3dqzy4-md-0-57764dfcc6-v9pcb, Cluster md-scale-i9tjpt/md-scale-3dqzy4: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine md-scale-3dqzy4-nmbhd, Cluster md-scale-i9tjpt/md-scale-3dqzy4: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "md-scale-i9tjpt" namespace @ 01/25/23 17:56:13.04
STEP: Deleting cluster md-scale-i9tjpt/md-scale-3dqzy4 @ 01/25/23 17:56:13.368
STEP: Deleting cluster md-scale-3dqzy4 @ 01/25/23 17:56:13.387
INFO: Waiting for the Cluster md-scale-i9tjpt/md-scale-3dqzy4 to be deleted
STEP: Waiting for cluster md-scale-3dqzy4 to be deleted @ 01/25/23 17:56:13.402
STEP: Deleting namespace used for hosting the "md-scale" test spec @ 01/25/23 17:56:43.426
... skipping 99 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist @ 01/25/23 18:02:08.675
STEP: Checking all the machines controlled by quick-start-boh0rw-md-0-ppvh4 are in the "<None>" failure domain @ 01/25/23 18:03:08.766
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED! @ 01/25/23 18:03:08.814
STEP: Dumping logs from the "quick-start-boh0rw" workload cluster @ 01/25/23 18:03:08.814
Failed to get logs for Machine quick-start-boh0rw-chgxk-g6r2r, Cluster quick-start-hg8lm0/quick-start-boh0rw: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine quick-start-boh0rw-md-0-ppvh4-677d6899d5-6ccnm, Cluster quick-start-hg8lm0/quick-start-boh0rw: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "quick-start-hg8lm0" namespace @ 01/25/23 18:03:13.471
STEP: Deleting cluster quick-start-hg8lm0/quick-start-boh0rw @ 01/25/23 18:03:13.84
STEP: Deleting cluster quick-start-boh0rw @ 01/25/23 18:03:13.863
INFO: Waiting for the Cluster quick-start-hg8lm0/quick-start-boh0rw to be deleted
STEP: Waiting for cluster quick-start-boh0rw to be deleted @ 01/25/23 18:03:13.874
STEP: Deleting namespace used for hosting the "quick-start" test spec @ 01/25/23 18:03:43.897
... skipping 55 lines ...
INFO: Waiting for rolling upgrade to start.
INFO: Waiting for MachineDeployment rolling upgrade to start
INFO: Waiting for rolling upgrade to complete.
INFO: Waiting for MachineDeployment rolling upgrade to complete
STEP: PASSED! @ 01/25/23 18:09:05.54
STEP: Dumping logs from the "md-rollout-2wzvbu" workload cluster @ 01/25/23 18:09:05.54
Failed to get logs for Machine md-rollout-2wzvbu-md-0-f67957b96-72w2d, Cluster md-rollout-wkkdzi/md-rollout-2wzvbu: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine md-rollout-2wzvbu-wblnp, Cluster md-rollout-wkkdzi/md-rollout-2wzvbu: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "md-rollout-wkkdzi" namespace @ 01/25/23 18:09:09.924
STEP: Deleting cluster md-rollout-wkkdzi/md-rollout-2wzvbu @ 01/25/23 18:09:10.229
STEP: Deleting cluster md-rollout-2wzvbu @ 01/25/23 18:09:10.248
INFO: Waiting for the Cluster md-rollout-wkkdzi/md-rollout-2wzvbu to be deleted
STEP: Waiting for cluster md-rollout-2wzvbu to be deleted @ 01/25/23 18:09:10.263
STEP: Deleting namespace used for hosting the "md-rollout" test spec @ 01/25/23 18:09:40.284
... skipping 56 lines ...
STEP: Waiting for deployment node-drain-cpa9rd-unevictable-workload/unevictable-pod-49b to be available @ 01/25/23 18:17:27.254
STEP: Scale down the controlplane of the workload cluster and make sure that nodes running workload can be deleted even the draining process is blocked. @ 01/25/23 18:17:37.575
INFO: Scaling controlplane node-drain-cpa9rd/node-drain-925gs9 from 3 to 1 replicas
INFO: Waiting for correct number of replicas to exist
STEP: PASSED! @ 01/25/23 18:21:28.25
STEP: Dumping logs from the "node-drain-925gs9" workload cluster @ 01/25/23 18:21:28.25
Failed to get logs for Machine node-drain-925gs9-svl46, Cluster node-drain-cpa9rd/node-drain-925gs9: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "node-drain-cpa9rd" namespace @ 01/25/23 18:21:30.279
STEP: Deleting cluster node-drain-cpa9rd/node-drain-925gs9 @ 01/25/23 18:21:30.601
STEP: Deleting cluster node-drain-925gs9 @ 01/25/23 18:21:30.623
INFO: Waiting for the Cluster node-drain-cpa9rd/node-drain-925gs9 to be deleted
STEP: Waiting for cluster node-drain-925gs9 to be deleted @ 01/25/23 18:21:30.636
STEP: Deleting namespace used for hosting the "node-drain" test spec @ 01/25/23 18:21:50.652
... skipping 42 lines ...
STEP: Waiting for the control plane to be ready @ 01/25/23 18:24:31.987
STEP: Checking all the control plane machines are in the expected failure domains @ 01/25/23 18:24:42.001
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist @ 01/25/23 18:24:42.023
[TIMEDOUT] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/quick_start.go:78 @ 01/25/23 18:25:06.779
STEP: Dumping logs from the "quick-start-x09556" workload cluster @ 01/25/23 18:25:06.781
Failed to get logs for Machine quick-start-x09556-md-0-65f97846dd-q4dnn, Cluster quick-start-e3gi5g/quick-start-x09556: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for Machine quick-start-x09556-wntnr, Cluster quick-start-e3gi5g/quick-start-x09556: dialing host IP address at 192.168.6.84: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
STEP: Dumping all the Cluster API resources in the "quick-start-e3gi5g" namespace @ 01/25/23 18:25:08.075
STEP: Deleting cluster quick-start-e3gi5g/quick-start-x09556 @ 01/25/23 18:25:08.376
STEP: Deleting cluster quick-start-x09556 @ 01/25/23 18:25:08.397
INFO: Waiting for the Cluster quick-start-e3gi5g/quick-start-x09556 to be deleted
STEP: Waiting for cluster quick-start-x09556 to be deleted @ 01/25/23 18:25:08.412
[TIMEDOUT] in [AfterEach] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/quick_start.go:109 @ 01/25/23 18:25:36.781
... skipping 76 lines ...
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000ef0100, 0x3a}, {0xc000ef0140, 0x35}, {0xc0005e3500, 0x1d}, ...}, ...}, ...)
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
| out := bufio.NewWriter(f)
| defer out.Flush()
> _, err = out.ReadFrom(podLogs)
| if err != nil && err != io.ErrUnexpectedEOF {
| // Failing to stream logs should not cause the test to fail
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
|
| // Watch each container's logs in a goroutine so we can stream them all concurrently.
> go func(pod corev1.Pod, container corev1.Container) {
| defer GinkgoRecover()
... skipping 6 lines ...
| for {
> select {
| case <-ctx.Done():
| return
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
| }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
|
> go func() {
| defer GinkgoRecover()
| for {
goroutine 25005 [sync.Cond.Wait]
... skipping 18 lines ...
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0024a1ec0, 0x28}, {0xc0024a1ef0, 0x23}, {0xc0022d00b0, 0xb}, ...}, ...}, ...)
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
| out := bufio.NewWriter(f)
| defer out.Flush()
> _, err = out.ReadFrom(podLogs)
| if err != nil && err != io.ErrUnexpectedEOF {
| // Failing to stream logs should not cause the test to fail
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
|
| // Watch each container's logs in a goroutine so we can stream them all concurrently.
> go func(pod corev1.Pod, container corev1.Container) {
| defer GinkgoRecover()
... skipping 6 lines ...
| for {
> select {
| case <-ctx.Done():
| return
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
| }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
|
> go func() {
| defer GinkgoRecover()
| for {
goroutine 25141 [sync.Cond.Wait]
... skipping 18 lines ...
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc002659170, 0x27}, {0xc0026591a0, 0x22}, {0xc001f12a64, 0xb}, ...}, ...}, ...)
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
| out := bufio.NewWriter(f)
| defer out.Flush()
> _, err = out.ReadFrom(podLogs)
| if err != nil && err != io.ErrUnexpectedEOF {
| // Failing to stream logs should not cause the test to fail
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
|
| // Watch each container's logs in a goroutine so we can stream them all concurrently.
> go func(pod corev1.Pod, container corev1.Container) {
| defer GinkgoRecover()
... skipping 6 lines ...
| for {
> select {
| case <-ctx.Done():
| return
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
| }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
|
> go func() {
| defer GinkgoRecover()
| for {
goroutine 76961 [chan receive, 3 minutes]
... skipping 26 lines ...
| for {
> select {
| case <-ctx.Done():
| return
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
| }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
|
> go func() {
| defer GinkgoRecover()
| for {
goroutine 25170 [sync.Cond.Wait]
... skipping 18 lines ...
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc002189180, 0x3e}, {0xc0021891c0, 0x39}, {0xc0023e11d0, 0x21}, ...}, ...}, ...)
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
| out := bufio.NewWriter(f)
| defer out.Flush()
> _, err = out.ReadFrom(podLogs)
| if err != nil && err != io.ErrUnexpectedEOF {
| // Failing to stream logs should not cause the test to fail
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
|
| // Watch each container's logs in a goroutine so we can stream them all concurrently.
> go func(pod corev1.Pod, container corev1.Container) {
| defer GinkgoRecover()
... skipping 12 lines ...
Summarizing 1 Failure:
[TIMEDOUT] Cluster creation with [Ignition] bootstrap [PR-Blocking] [It] Should create a workload cluster
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/quick_start.go:78
Ran 9 of 17 Specs in 3575.322 seconds
FAIL! - Suite Timeout Elapsed -- 8 Passed | 1 Failed | 1 Pending | 7 Skipped
--- FAIL: TestE2E (3575.32s)
FAIL
Ginkgo ran 1 suite in 1h0m31.606954499s
Test Suite Failed
real 60m31.629s
user 6m3.513s
sys 1m18.829s
make: *** [Makefile:183: e2e] Error 1
Releasing IP claims
ipclaim.ipam.metal3.io "ip-claim-c02f8741f746fccc25b4eef1d041f41b807086d6" deleted
ipclaim.ipam.metal3.io "workload-ip-claim-d420737b8734c196d82d295bc71dcaa6561bfb3e" deleted
vpn
WARNING: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com] appears to be a service account. Service account tokens cannot be revoked, but they will expire automatically. To prevent use of the service account token earlier than the expiration, delete or disable the parent service account. To explicitly delete the key associated with the service account use `gcloud iam service-accounts keys delete` instead.
Revoked credentials:
... skipping 13 lines ...