Error lines from build-log.txt
... skipping 175 lines ...
#18 exporting to image
#18 exporting layers
#18 exporting layers 0.4s done
#18 writing image sha256:27ae03709d1284232610f0b0626faa4ba6d909b0b669ce05f9096ab2ced66ed2 done
#18 naming to gcr.io/k8s-staging-cluster-api/capv-manager:e2e done
#18 DONE 0.4s
WARNING: failed to get git remote url: fatal: No remote configured to list refs from.
Activated service account credentials for: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com]
Copying file:///logs/artifacts/tempContainers/image.tar [Content-Type=application/x-tar]...
- [1 files][ 74.6 MiB/ 74.6 MiB]
Operation completed over 1 objects/74.6 MiB.
make -C /home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/hack/tools ginkgo
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/hack/tools'
... skipping 135 lines ...
#18 exporting to image
#18 exporting layers done
#18 writing image sha256:27ae03709d1284232610f0b0626faa4ba6d909b0b669ce05f9096ab2ced66ed2 done
#18 naming to gcr.io/k8s-staging-cluster-api/capv-manager:e2e done
#18 DONE 0.0s
WARNING: failed to get git remote url: fatal: No remote configured to list refs from.
make release-manifests
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
make manifests STAGE=release MANIFEST_DIR=out PULL_POLICY=IfNotPresent IMAGE=gcr.io/cluster-api-provider-vsphere/release/manager:v1.6.0
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
make generate-flavors FLAVOR_DIR=out
make[3]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
... skipping 246 lines ...
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capv-e2e-s1fuq1/storage-policy-d4t9aa to be provisioned
STEP: Waiting for one control plane node to exist @ 01/31/23 17:31:44.837
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capv-e2e-s1fuq1/storage-policy-d4t9aa to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready @ 01/31/23 17:36:45.036
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:176 @ 01/31/23 17:46:45.041
STEP: Dumping all the Cluster API resources in the "capv-e2e-s1fuq1" namespace @ 01/31/23 17:46:45.041
STEP: cleaning up namespace: capv-e2e-s1fuq1 @ 01/31/23 17:46:45.308
STEP: Deleting cluster storage-policy-d4t9aa @ 01/31/23 17:46:45.327
INFO: Waiting for the Cluster capv-e2e-s1fuq1/storage-policy-d4t9aa to be deleted
STEP: Waiting for cluster storage-policy-d4t9aa to be deleted @ 01/31/23 17:46:45.342
STEP: Deleting namespace used for hosting test spec @ 01/31/23 17:47:05.357
INFO: Deleting namespace capv-e2e-s1fuq1
• [FAILED] [953.183 seconds]
Cluster creation with storage policy [It] should create a cluster successfully
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/storage_policy_test.go:57
[FAILED] Timed out after 600.003s.
{
"metadata": {
"creationTimestamp": null
},
"spec": {
"version": "",
... skipping 58 lines ...
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase @ 01/31/23 17:47:06.593
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by quick-start-yqu6wf/quick-start-63iaid-tw9nl to be provisioned
STEP: Waiting for one control plane node to exist @ 01/31/23 17:47:26.647
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/31/23 17:57:26.652
STEP: Dumping logs from the "quick-start-63iaid" workload cluster @ 01/31/23 17:57:26.652
Failed to get logs for Machine quick-start-63iaid-md-0-mwz2k-84fc44d75d-mw5hb, Cluster quick-start-yqu6wf/quick-start-63iaid: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for Machine quick-start-63iaid-tw9nl-hrfxq, Cluster quick-start-yqu6wf/quick-start-63iaid: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "quick-start-yqu6wf" namespace @ 01/31/23 17:57:28.851
STEP: Deleting cluster quick-start-yqu6wf/quick-start-63iaid @ 01/31/23 17:57:29.141
STEP: Deleting cluster quick-start-63iaid @ 01/31/23 17:57:29.162
INFO: Waiting for the Cluster quick-start-yqu6wf/quick-start-63iaid to be deleted
STEP: Waiting for cluster quick-start-63iaid to be deleted @ 01/31/23 17:57:29.179
STEP: Deleting namespace used for hosting the "quick-start" test spec @ 01/31/23 17:57:49.195
INFO: Deleting namespace quick-start-yqu6wf
• [FAILED] [643.832 seconds]
ClusterClass Creation using Cluster API quick-start test [PR-Blocking] [ClusterClass] [It] Should create a workload cluster
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/quick_start.go:78
[FAILED] Timed out after 600.004s.
No Control Plane machines came into existence.
Expected
<bool>: false
to be true
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/31/23 17:57:26.652
------------------------------
... skipping 31 lines ...
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase @ 01/31/23 17:57:50.443
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by clusterclass-changes-fyolbo/clusterclass-changes-fjjqec-dpjnp to be provisioned
STEP: Waiting for one control plane node to exist @ 01/31/23 17:58:10.5
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/31/23 18:08:10.502
STEP: Dumping logs from the "clusterclass-changes-fjjqec" workload cluster @ 01/31/23 18:08:10.502
Failed to get logs for Machine clusterclass-changes-fjjqec-dpjnp-nvxg8, Cluster clusterclass-changes-fyolbo/clusterclass-changes-fjjqec: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine clusterclass-changes-fjjqec-md-0-d94z2-6497556597-6bqgg, Cluster clusterclass-changes-fyolbo/clusterclass-changes-fjjqec: dialing host IP address at : dial tcp :22: connect: connection refused
STEP: Dumping all the Cluster API resources in the "clusterclass-changes-fyolbo" namespace @ 01/31/23 18:08:12.759
STEP: Deleting cluster clusterclass-changes-fyolbo/clusterclass-changes-fjjqec @ 01/31/23 18:08:13.113
STEP: Deleting cluster clusterclass-changes-fjjqec @ 01/31/23 18:08:13.134
INFO: Waiting for the Cluster clusterclass-changes-fyolbo/clusterclass-changes-fjjqec to be deleted
STEP: Waiting for cluster clusterclass-changes-fjjqec to be deleted @ 01/31/23 18:08:13.146
STEP: Deleting namespace used for hosting the "clusterclass-changes" test spec @ 01/31/23 18:08:33.163
INFO: Deleting namespace clusterclass-changes-fyolbo
• [FAILED] [643.970 seconds]
When testing ClusterClass changes [ClusterClass] [It] Should successfully rollout the managed topology upon changes to the ClusterClass
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/clusterclass_changes.go:132
[FAILED] Timed out after 600.001s.
No Control Plane machines came into existence.
Expected
<bool>: false
to be true
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/31/23 18:08:10.502
------------------------------
... skipping 31 lines ...
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase @ 01/31/23 18:08:34.347
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by md-rollout-aan4vo/md-rollout-kv157p to be provisioned
STEP: Waiting for one control plane node to exist @ 01/31/23 18:08:54.383
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/31/23 18:18:54.386
STEP: Dumping logs from the "md-rollout-kv157p" workload cluster @ 01/31/23 18:18:54.386
Failed to get logs for Machine md-rollout-kv157p-md-0-86497c7b56-njdww, Cluster md-rollout-aan4vo/md-rollout-kv157p: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for Machine md-rollout-kv157p-mzcvg, Cluster md-rollout-aan4vo/md-rollout-kv157p: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "md-rollout-aan4vo" namespace @ 01/31/23 18:18:56.552
STEP: Deleting cluster md-rollout-aan4vo/md-rollout-kv157p @ 01/31/23 18:18:56.84
STEP: Deleting cluster md-rollout-kv157p @ 01/31/23 18:18:56.857
INFO: Waiting for the Cluster md-rollout-aan4vo/md-rollout-kv157p to be deleted
STEP: Waiting for cluster md-rollout-kv157p to be deleted @ 01/31/23 18:18:56.87
STEP: Deleting namespace used for hosting the "md-rollout" test spec @ 01/31/23 18:19:16.885
INFO: Deleting namespace md-rollout-aan4vo
• [FAILED] [643.724 seconds]
ClusterAPI Machine Deployment Tests Running the MachineDeployment rollout spec [It] Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/md_rollout.go:71
[FAILED] Timed out after 600.002s.
No Control Plane machines came into existence.
Expected
<bool>: false
to be true
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/31/23 18:18:54.386
------------------------------
... skipping 113 lines ...
| for {
> select {
| case <-ctx.Done():
| return
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
| }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
|
> go func() {
| defer GinkgoRecover()
| for {
goroutine 33042 [sync.Cond.Wait, 2 minutes]
... skipping 18 lines ...
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c8f30, 0x28}, {0xc0021c8f60, 0x23}, {0xc0021c0704, 0xb}, ...}, ...}, ...)
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
| out := bufio.NewWriter(f)
| defer out.Flush()
> _, err = out.ReadFrom(podLogs)
| if err != nil && err != io.ErrUnexpectedEOF {
| // Failing to stream logs should not cause the test to fail
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
|
| // Watch each container's logs in a goroutine so we can stream them all concurrently.
> go func(pod corev1.Pod, container corev1.Container) {
| defer GinkgoRecover()
... skipping 44 lines ...
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000b41400, 0x3e}, {0xc000b41440, 0x39}, {0xc001a21470, 0x21}, ...}, ...}, ...)
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
| out := bufio.NewWriter(f)
| defer out.Flush()
> _, err = out.ReadFrom(podLogs)
| if err != nil && err != io.ErrUnexpectedEOF {
| // Failing to stream logs should not cause the test to fail
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
|
| // Watch each container's logs in a goroutine so we can stream them all concurrently.
> go func(pod corev1.Pod, container corev1.Container) {
| defer GinkgoRecover()
... skipping 6 lines ...
| for {
> select {
| case <-ctx.Done():
| return
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
| }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
|
> go func() {
| defer GinkgoRecover()
| for {
goroutine 32993 [select]
... skipping 3 lines ...
| for {
> select {
| case <-ctx.Done():
| return
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
| }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
|
> go func() {
| defer GinkgoRecover()
| for {
goroutine 32990 [sync.Cond.Wait]
... skipping 18 lines ...
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001da7980, 0x27}, {0xc001da79b0, 0x22}, {0xc002a91e70, 0xb}, ...}, ...}, ...)
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
| out := bufio.NewWriter(f)
| defer out.Flush()
> _, err = out.ReadFrom(podLogs)
| if err != nil && err != io.ErrUnexpectedEOF {
| // Failing to stream logs should not cause the test to fail
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
|
| // Watch each container's logs in a goroutine so we can stream them all concurrently.
> go func(pod corev1.Pod, container corev1.Container) {
| defer GinkgoRecover()
... skipping 6 lines ...
| for {
> select {
| case <-ctx.Done():
| return
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
| }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
|
> go func() {
| defer GinkgoRecover()
| for {
goroutine 33003 [sync.Cond.Wait, 8 minutes]
... skipping 18 lines ...
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0012953c0, 0x3a}, {0xc001295400, 0x35}, {0xc0023b1b20, 0x1d}, ...}, ...}, ...)
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
| out := bufio.NewWriter(f)
| defer out.Flush()
> _, err = out.ReadFrom(podLogs)
| if err != nil && err != io.ErrUnexpectedEOF {
| // Failing to stream logs should not cause the test to fail
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
|
| // Watch each container's logs in a goroutine so we can stream them all concurrently.
> go func(pod corev1.Pod, container corev1.Container) {
| defer GinkgoRecover()
... skipping 11 lines ...
STEP: Cleaning up the vSphere session @ 01/31/23 18:27:50.266
STEP: Tearing down the management cluster @ 01/31/23 18:27:50.481
[SynchronizedAfterSuite] PASSED [1.580 seconds]
------------------------------
Summarizing 5 Failures:
[FAIL] Cluster creation with storage policy [It] should create a cluster successfully
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:176
[FAIL] ClusterClass Creation using Cluster API quick-start test [PR-Blocking] [ClusterClass] [It] Should create a workload cluster
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
[FAIL] When testing ClusterClass changes [ClusterClass] [It] Should successfully rollout the managed topology upon changes to the ClusterClass
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
[FAIL] ClusterAPI Machine Deployment Tests Running the MachineDeployment rollout spec [It] Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
[TIMEDOUT] DHCPOverrides configuration test when Creating a cluster with DHCPOverrides configured [It] Only configures the network with the provided nameservers
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/dhcp_overrides_test.go:66
Ran 5 of 17 Specs in 3515.826 seconds
FAIL! - Suite Timeout Elapsed -- 0 Passed | 5 Failed | 1 Pending | 11 Skipped
--- FAIL: TestE2E (3515.83s)
FAIL
Ginkgo ran 1 suite in 1h0m22.095758641s
Test Suite Failed
real 60m22.150s
user 9m19.891s
sys 2m38.647s
make: *** [Makefile:183: e2e] Error 1
Releasing IP claims
ipclaim.ipam.metal3.io "ip-claim-a0c1f281d3cd33ef78622f17b496197096595ddd" deleted
ipclaim.ipam.metal3.io "workload-ip-claim-b82287faf9b5fdd5259939056e7f7e7ccde54cba" deleted
vpn
WARNING: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com] appears to be a service account. Service account tokens cannot be revoked, but they will expire automatically. To prevent use of the service account token earlier than the expiration, delete or disable the parent service account. To explicitly delete the key associated with the service account use `gcloud iam service-accounts keys delete` instead.
Revoked credentials:
... skipping 13 lines ...