Result FAILURE
Tests 0 failed / 0 succeeded
Started 2023-01-30 17:21
Elapsed 1h4m
Revision main

No Test Failures!


Error lines from build-log.txt

... skipping 178 lines ...
#18 exporting to image
#18 exporting layers
#18 exporting layers 0.4s done
#18 writing image sha256:9964ddd88f1d3ed29329e93e046bbe631305f0d208d0a51b6c6b7ef784ba04ec done
#18 naming to gcr.io/k8s-staging-cluster-api/capv-manager:e2e done
#18 DONE 0.4s
WARNING: failed to get git remote url: fatal: No remote configured to list refs from.
Activated service account credentials for: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com]
Copying file:///logs/artifacts/tempContainers/image.tar [Content-Type=application/x-tar]...
/ [0 files][    0.0 B/ 74.6 MiB]
- [1 files][ 74.6 MiB/ 74.6 MiB]
Operation completed over 1 objects/74.6 MiB.
make -C /home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/hack/tools ginkgo
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/hack/tools'
... skipping 129 lines ...

#18 exporting to image
#18 exporting layers done
#18 writing image sha256:9964ddd88f1d3ed29329e93e046bbe631305f0d208d0a51b6c6b7ef784ba04ec done
#18 naming to gcr.io/k8s-staging-cluster-api/capv-manager:e2e done
#18 DONE 0.0s
WARNING: failed to get git remote url: fatal: No remote configured to list refs from.
make release-manifests
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
make manifests STAGE=release MANIFEST_DIR=out PULL_POLICY=IfNotPresent IMAGE=gcr.io/cluster-api-provider-vsphere/release/manager:v1.6.0
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
make generate-flavors FLAVOR_DIR=out
make[3]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
... skipping 259 lines ...
  Discovering machine health check resources
  Ensuring there is at least 1 Machine that MachineHealthCheck is matching
  Patching MachineHealthCheck unhealthy condition to one of the nodes
  INFO: Patching the node condition to the node
  Waiting for remediation
  Waiting until the node with unhealthy node condition is remediated
E0130 17:42:36.810079   26815 request.go:977] Unexpected error when reading response body: read tcp 10.8.0.3:54720->192.168.6.162:6443: read: connection reset by peer
  (the same "connection reset by peer" error was logged 15 times between 17:42:36.810079 and 17:42:36.810364)
  STEP: PASSED! @ 01/30/23 17:42:39.343
  STEP: Dumping logs from the "mhc-remediation-u7dk8w" workload cluster @ 01/30/23 17:42:39.344
Failed to get logs for Machine mhc-remediation-u7dk8w-md-0-7c9545d8d4-thgqs, Cluster mhc-remediation-nhfydo/mhc-remediation-u7dk8w: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine mhc-remediation-u7dk8w-t8j2j, Cluster mhc-remediation-nhfydo/mhc-remediation-u7dk8w: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "mhc-remediation-nhfydo" namespace @ 01/30/23 17:42:43.585
  STEP: Deleting cluster mhc-remediation-nhfydo/mhc-remediation-u7dk8w @ 01/30/23 17:42:43.873
  STEP: Deleting cluster mhc-remediation-u7dk8w @ 01/30/23 17:42:43.895
  INFO: Waiting for the Cluster mhc-remediation-nhfydo/mhc-remediation-u7dk8w to be deleted
  STEP: Waiting for cluster mhc-remediation-u7dk8w to be deleted @ 01/30/23 17:42:43.91
  STEP: Deleting namespace used for hosting the "mhc-remediation" test spec @ 01/30/23 17:43:13.931
... skipping 35 lines ...

  INFO: Waiting for the cluster infrastructure to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 01/30/23 17:43:15.08
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by mhc-remediation-1c5y8b/mhc-remediation-ka36o9 to be provisioned
  STEP: Waiting for one control plane node to exist @ 01/30/23 17:44:05.164
  [FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/30/23 17:54:05.165
  STEP: Dumping logs from the "mhc-remediation-ka36o9" workload cluster @ 01/30/23 17:54:05.166
Failed to get logs for Machine mhc-remediation-ka36o9-chstc, Cluster mhc-remediation-1c5y8b/mhc-remediation-ka36o9: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine mhc-remediation-ka36o9-md-0-6cf97bc696-mdx85, Cluster mhc-remediation-1c5y8b/mhc-remediation-ka36o9: dialing host IP address at : dial tcp :22: connect: connection refused
  STEP: Dumping all the Cluster API resources in the "mhc-remediation-1c5y8b" namespace @ 01/30/23 17:54:07.41
  STEP: Deleting cluster mhc-remediation-1c5y8b/mhc-remediation-ka36o9 @ 01/30/23 17:54:07.685
  STEP: Deleting cluster mhc-remediation-ka36o9 @ 01/30/23 17:54:07.707
  INFO: Waiting for the Cluster mhc-remediation-1c5y8b/mhc-remediation-ka36o9 to be deleted
  STEP: Waiting for cluster mhc-remediation-ka36o9 to be deleted @ 01/30/23 17:54:07.72
  STEP: Deleting namespace used for hosting the "mhc-remediation" test spec @ 01/30/23 17:54:27.734
  INFO: Deleting namespace mhc-remediation-1c5y8b
• [FAILED] [673.802 seconds]
When testing unhealthy machines remediation [It] Should successfully trigger KCP remediation
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/mhc_remediations.go:116

  [FAILED] Timed out after 600.000s.
  No Control Plane machines came into existence. 
  Expected
      <bool>: false
  to be true
  In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/30/23 17:54:05.165
------------------------------
... skipping 31 lines ...

  INFO: Waiting for the cluster infrastructure to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 01/30/23 17:54:28.962
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by quick-start-c21evg/quick-start-p7ka95-2cclp to be provisioned
  STEP: Waiting for one control plane node to exist @ 01/30/23 17:54:49.022
  [FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/30/23 18:04:49.026
  STEP: Dumping logs from the "quick-start-p7ka95" workload cluster @ 01/30/23 18:04:49.026
Failed to get logs for Machine quick-start-p7ka95-2cclp-6tsxb, Cluster quick-start-c21evg/quick-start-p7ka95: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine quick-start-p7ka95-md-0-ddqwc-dcc6c77b-bldpx, Cluster quick-start-c21evg/quick-start-p7ka95: dialing host IP address at : dial tcp :22: connect: connection refused
  STEP: Dumping all the Cluster API resources in the "quick-start-c21evg" namespace @ 01/30/23 18:04:51.604
  STEP: Deleting cluster quick-start-c21evg/quick-start-p7ka95 @ 01/30/23 18:04:51.95
  STEP: Deleting cluster quick-start-p7ka95 @ 01/30/23 18:04:51.972
  INFO: Waiting for the Cluster quick-start-c21evg/quick-start-p7ka95 to be deleted
  STEP: Waiting for cluster quick-start-p7ka95 to be deleted @ 01/30/23 18:04:51.99
  STEP: Deleting namespace used for hosting the "quick-start" test spec @ 01/30/23 18:05:12.01
  INFO: Deleting namespace quick-start-c21evg
• [FAILED] [644.280 seconds]
ClusterClass Creation using Cluster API quick-start test [PR-Blocking] [ClusterClass] [It] Should create a workload cluster
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/quick_start.go:78

  [FAILED] Timed out after 600.003s.
  No Control Plane machines came into existence. 
  Expected
      <bool>: false
  to be true
  In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/30/23 18:04:49.026
------------------------------
... skipping 37 lines ...

  INFO: Waiting for the cluster infrastructure to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 01/30/23 18:05:13.465
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by md-rollout-tg4skn/md-rollout-qaoasf to be provisioned
  STEP: Waiting for one control plane node to exist @ 01/30/23 18:05:33.513
  [FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/30/23 18:15:33.514
  STEP: Dumping logs from the "md-rollout-qaoasf" workload cluster @ 01/30/23 18:15:33.515
Failed to get logs for Machine md-rollout-qaoasf-2mhgt, Cluster md-rollout-tg4skn/md-rollout-qaoasf: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine md-rollout-qaoasf-md-0-6f98b95547-kns98, Cluster md-rollout-tg4skn/md-rollout-qaoasf: dialing host IP address at : dial tcp :22: connect: connection refused
  STEP: Dumping all the Cluster API resources in the "md-rollout-tg4skn" namespace @ 01/30/23 18:15:35.842
  STEP: Deleting cluster md-rollout-tg4skn/md-rollout-qaoasf @ 01/30/23 18:15:36.15
  STEP: Deleting cluster md-rollout-qaoasf @ 01/30/23 18:15:36.171
  INFO: Waiting for the Cluster md-rollout-tg4skn/md-rollout-qaoasf to be deleted
  STEP: Waiting for cluster md-rollout-qaoasf to be deleted @ 01/30/23 18:15:36.191
  STEP: Deleting namespace used for hosting the "md-rollout" test spec @ 01/30/23 18:15:56.206
  INFO: Deleting namespace md-rollout-tg4skn
• [FAILED] [644.188 seconds]
ClusterAPI Machine Deployment Tests Running the MachineDeployment rollout spec [It] Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/md_rollout.go:71

  [FAILED] Timed out after 600.000s.
  No Control Plane machines came into existence. 
  Expected
      <bool>: false
  to be true
  In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/30/23 18:15:33.514
------------------------------
... skipping 33 lines ...
  STEP: Waiting for cluster to enter the provisioned phase @ 01/30/23 18:15:57.753
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by clusterclass-changes-xphail/clusterclass-changes-uvqvb6-cdndr to be provisioned
  STEP: Waiting for one control plane node to exist @ 01/30/23 18:16:17.809
  [TIMEDOUT] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/clusterclass_changes.go:132 @ 01/30/23 18:25:35.537
  STEP: Dumping logs from the "clusterclass-changes-uvqvb6" workload cluster @ 01/30/23 18:25:35.539
Failed to get logs for Machine clusterclass-changes-uvqvb6-cdndr-kjtzr, Cluster clusterclass-changes-xphail/clusterclass-changes-uvqvb6: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine clusterclass-changes-uvqvb6-md-0-5r472-76b99b5845-gr57d, Cluster clusterclass-changes-xphail/clusterclass-changes-uvqvb6: dialing host IP address at : dial tcp :22: connect: connection refused
  STEP: Dumping all the Cluster API resources in the "clusterclass-changes-xphail" namespace @ 01/30/23 18:25:37.979
  STEP: Deleting cluster clusterclass-changes-xphail/clusterclass-changes-uvqvb6 @ 01/30/23 18:25:38.274
  STEP: Deleting cluster clusterclass-changes-uvqvb6 @ 01/30/23 18:25:38.296
  INFO: Waiting for the Cluster clusterclass-changes-xphail/clusterclass-changes-uvqvb6 to be deleted
  STEP: Waiting for cluster clusterclass-changes-uvqvb6 to be deleted @ 01/30/23 18:25:38.307
  STEP: Deleting namespace used for hosting the "clusterclass-changes" test spec @ 01/30/23 18:25:58.322
... skipping 69 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {

        goroutine 33116 [select]
... skipping 3 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {

        goroutine 33258 [sync.Cond.Wait, 3 minutes]
... skipping 18 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001637100, 0x3e}, {0xc001637140, 0x39}, {0xc00287eba0, 0x21}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 21 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc002591260, 0x27}, {0xc002591290, 0x22}, {0xc0016d8160, 0xb}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 6 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {

        goroutine 33120 [sync.Cond.Wait]
... skipping 18 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001dba960, 0x28}, {0xc001dba990, 0x23}, {0xc0027c71a4, 0xb}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 29 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {

        goroutine 33251 [sync.Cond.Wait, 9 minutes]
... skipping 18 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001522fc0, 0x3a}, {0xc001523000, 0x35}, {0xc0025d3e80, 0x1d}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 6 lines ...
  STEP: Cleaning up the vSphere session @ 01/30/23 18:25:58.342
  STEP: Tearing down the management cluster @ 01/30/23 18:25:58.557
[SynchronizedAfterSuite] PASSED [1.654 seconds]
------------------------------

Summarizing 4 Failures:
  [FAIL] When testing unhealthy machines remediation [It] Should successfully trigger KCP remediation
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
  [FAIL] ClusterClass Creation using Cluster API quick-start test [PR-Blocking] [ClusterClass] [It] Should create a workload cluster
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
  [FAIL] ClusterAPI Machine Deployment Tests Running the MachineDeployment rollout spec [It] Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
  [TIMEDOUT] When testing ClusterClass changes [ClusterClass] [It] Should successfully rollout the managed topology upon changes to the ClusterClass
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/clusterclass_changes.go:132

Ran 5 of 17 Specs in 3569.946 seconds
FAIL! - Suite Timeout Elapsed -- 1 Passed | 4 Failed | 1 Pending | 11 Skipped
--- FAIL: TestE2E (3569.95s)
FAIL

Ginkgo ran 1 suite in 1h0m24.550358523s

Test Suite Failed

real	60m24.571s
user	5m38.724s
sys	1m5.591s
make: *** [Makefile:183: e2e] Error 1
Releasing IP claims
ipclaim.ipam.metal3.io "ip-claim-5176910efa514d2557c75aae47a121064853acef" deleted
ipclaim.ipam.metal3.io "workload-ip-claim-09cf826415f5ee6b27de3a287a02b0335a52d4d8" deleted
vpn
WARNING: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com] appears to be a service account. Service account tokens cannot be revoked, but they will expire automatically. To prevent use of the service account token earlier than the expiration, delete or disable the parent service account. To explicitly delete the key associated with the service account use `gcloud iam service-accounts keys delete` instead`.
Revoked credentials:
... skipping 13 lines ...