Error lines from build-log.txt
... skipping 572 lines ...
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase @ 01/23/23 17:26:42.925
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by quick-start-j9adhg/quick-start-cl8usm-xnd8t to be provisioned
STEP: Waiting for one control plane node to exist @ 01/23/23 17:27:12.995
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/23/23 17:37:12.998
STEP: Dumping logs from the "quick-start-cl8usm" workload cluster @ 01/23/23 17:37:12.999
Failed to get logs for Machine quick-start-cl8usm-md-0-vdmlh-949ff58fb-jfqwd, Cluster quick-start-j9adhg/quick-start-cl8usm: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for Machine quick-start-cl8usm-xnd8t-wxhxg, Cluster quick-start-j9adhg/quick-start-cl8usm: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "quick-start-j9adhg" namespace @ 01/23/23 17:37:15.413
STEP: Deleting cluster quick-start-j9adhg/quick-start-cl8usm @ 01/23/23 17:37:16.018
STEP: Deleting cluster quick-start-cl8usm @ 01/23/23 17:37:16.06
INFO: Waiting for the Cluster quick-start-j9adhg/quick-start-cl8usm to be deleted
STEP: Waiting for cluster quick-start-cl8usm to be deleted @ 01/23/23 17:37:16.087
STEP: Deleting namespace used for hosting the "quick-start" test spec @ 01/23/23 17:37:36.113
INFO: Deleting namespace quick-start-j9adhg
• [FAILED] [655.996 seconds]
ClusterClass Creation using Cluster API quick-start test [PR-Blocking] [ClusterClass] [It] Should create a workload cluster
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/quick_start.go:78
[FAILED] Timed out after 600.002s.
No Control Plane machines came into existence.
Expected
<bool>: false
to be true
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/23/23 17:37:12.998
------------------------------
... skipping 31 lines ...
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase @ 01/23/23 17:37:38.102
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by quick-start-60tp7p/quick-start-9p9hon to be provisioned
STEP: Waiting for one control plane node to exist @ 01/23/23 17:37:58.175
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/23/23 17:47:58.176
STEP: Dumping logs from the "quick-start-9p9hon" workload cluster @ 01/23/23 17:47:58.176
Failed to get logs for Machine quick-start-9p9hon-md-0-fc97bd7b4-mgp6l, Cluster quick-start-60tp7p/quick-start-9p9hon: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for Machine quick-start-9p9hon-nn75p, Cluster quick-start-60tp7p/quick-start-9p9hon: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "quick-start-60tp7p" namespace @ 01/23/23 17:48:00.49
STEP: Deleting cluster quick-start-60tp7p/quick-start-9p9hon @ 01/23/23 17:48:00.807
STEP: Deleting cluster quick-start-9p9hon @ 01/23/23 17:48:00.827
INFO: Waiting for the Cluster quick-start-60tp7p/quick-start-9p9hon to be deleted
STEP: Waiting for cluster quick-start-9p9hon to be deleted @ 01/23/23 17:48:00.843
STEP: Deleting namespace used for hosting the "quick-start" test spec @ 01/23/23 17:48:20.859
INFO: Deleting namespace quick-start-60tp7p
• [FAILED] [644.738 seconds]
Cluster Creation using Cluster API quick-start test [PR-Blocking] [It] Should create a workload cluster
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/quick_start.go:78
[FAILED] Timed out after 600.000s.
No Control Plane machines came into existence.
Expected
<bool>: false
to be true
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/23/23 17:47:58.176
------------------------------
... skipping 32 lines ...
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase @ 01/23/23 17:48:30.16
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by hw-upgrade-e2e-5pmlt6/hw-upgrade-60y60e to be provisioned
STEP: Waiting for one control plane node to exist @ 01/23/23 17:49:20.215
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/23/23 17:59:20.216
STEP: Dumping all the Cluster API resources in the "hw-upgrade-e2e-5pmlt6" namespace @ 01/23/23 17:59:20.216
STEP: cleaning up namespace: hw-upgrade-e2e-5pmlt6 @ 01/23/23 17:59:20.56
STEP: Deleting cluster hw-upgrade-60y60e @ 01/23/23 17:59:20.585
INFO: Waiting for the Cluster hw-upgrade-e2e-5pmlt6/hw-upgrade-60y60e to be deleted
STEP: Waiting for cluster hw-upgrade-60y60e to be deleted @ 01/23/23 17:59:20.601
STEP: Deleting namespace used for hosting test spec @ 01/23/23 17:59:40.617
INFO: Deleting namespace hw-upgrade-e2e-5pmlt6
• [FAILED] [679.756 seconds]
Hardware version upgrade [It] creates a cluster with VM hardware versions upgraded
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/hardware_upgrade_test.go:57
[FAILED] Timed out after 600.001s.
No Control Plane machines came into existence.
Expected
<bool>: false
to be true
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/23/23 17:59:20.216
------------------------------
... skipping 31 lines ...
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase @ 01/23/23 17:59:42.092
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capv-e2e-yjxt1x/storage-policy-gf6w1u to be provisioned
STEP: Waiting for one control plane node to exist @ 01/23/23 18:00:02.163
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/23/23 18:10:02.166
STEP: Dumping all the Cluster API resources in the "capv-e2e-yjxt1x" namespace @ 01/23/23 18:10:02.166
STEP: cleaning up namespace: capv-e2e-yjxt1x @ 01/23/23 18:10:02.445
STEP: Deleting cluster storage-policy-gf6w1u @ 01/23/23 18:10:02.461
INFO: Waiting for the Cluster capv-e2e-yjxt1x/storage-policy-gf6w1u to be deleted
STEP: Waiting for cluster storage-policy-gf6w1u to be deleted @ 01/23/23 18:10:02.475
STEP: Deleting namespace used for hosting test spec @ 01/23/23 18:10:22.49
INFO: Deleting namespace capv-e2e-yjxt1x
• [FAILED] [641.867 seconds]
Cluster creation with storage policy [It] should create a cluster successfully
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/storage_policy_test.go:57
[FAILED] Timed out after 600.002s.
No Control Plane machines came into existence.
Expected
<bool>: false
to be true
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/23/23 18:10:02.166
------------------------------
... skipping 36 lines ...
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase @ 01/23/23 18:10:23.673
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by quick-start-w06wbr/quick-start-kshjdz to be provisioned
STEP: Waiting for one control plane node to exist @ 01/23/23 18:11:13.737
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/23/23 18:21:13.906
STEP: Dumping logs from the "quick-start-kshjdz" workload cluster @ 01/23/23 18:21:13.906
Failed to get logs for Machine quick-start-kshjdz-lppkn, Cluster quick-start-w06wbr/quick-start-kshjdz: dialing host IP address at 192.168.6.127: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to get logs for Machine quick-start-kshjdz-md-0-b9868b7bc-9x4dp, Cluster quick-start-w06wbr/quick-start-kshjdz: dialing host IP address at : dial tcp :22: connect: connection refused
STEP: Dumping all the Cluster API resources in the "quick-start-w06wbr" namespace @ 01/23/23 18:21:15.244
STEP: Deleting cluster quick-start-w06wbr/quick-start-kshjdz @ 01/23/23 18:21:15.533
STEP: Deleting cluster quick-start-kshjdz @ 01/23/23 18:21:15.552
INFO: Waiting for the Cluster quick-start-w06wbr/quick-start-kshjdz to be deleted
STEP: Waiting for cluster quick-start-kshjdz to be deleted @ 01/23/23 18:21:15.567
STEP: Deleting namespace used for hosting the "quick-start" test spec @ 01/23/23 18:21:35.58
INFO: Deleting namespace quick-start-w06wbr
• [FAILED] [672.932 seconds]
Cluster creation with [Ignition] bootstrap [PR-Blocking] [It] Should create a workload cluster
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/quick_start.go:78
[FAILED] Timed out after 600.003s.
No Control Plane machines came into existence.
Expected
<bool>: false
to be true
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/23/23 18:21:13.906
------------------------------
... skipping 33 lines ...
STEP: Waiting for cluster to enter the provisioned phase @ 01/23/23 18:21:36.63
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by node-drain-rvlttx/node-drain-3wn96g to be provisioned
STEP: Waiting for one control plane node to exist @ 01/23/23 18:22:26.678
[TIMEDOUT] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/node_drain_timeout.go:83 @ 01/23/23 18:23:50.195
STEP: Dumping logs from the "node-drain-3wn96g" workload cluster @ 01/23/23 18:23:50.196
Failed to get logs for Machine node-drain-3wn96g-bqsd9, Cluster node-drain-rvlttx/node-drain-3wn96g: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for Machine node-drain-3wn96g-md-0-79788b5b55-h6rlz, Cluster node-drain-rvlttx/node-drain-3wn96g: dialing host IP address at : dial tcp :22: connect: connection refused
STEP: Dumping all the Cluster API resources in the "node-drain-rvlttx" namespace @ 01/23/23 18:23:50.261
STEP: Deleting cluster node-drain-rvlttx/node-drain-3wn96g @ 01/23/23 18:23:50.583
STEP: Deleting cluster node-drain-3wn96g @ 01/23/23 18:23:50.601
INFO: Waiting for the Cluster node-drain-rvlttx/node-drain-3wn96g to be deleted
STEP: Waiting for cluster node-drain-3wn96g to be deleted @ 01/23/23 18:23:50.61
STEP: Deleting namespace used for hosting the "node-drain" test spec @ 01/23/23 18:24:10.624
... skipping 92 lines ...
| for {
> select {
| case <-ctx.Done():
| return
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
| }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
|
> go func() {
| defer GinkgoRecover()
| for {
goroutine 23950 [sync.Cond.Wait]
... skipping 18 lines ...
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000c17a40, 0x3a}, {0xc000c17a80, 0x35}, {0xc000d23bc0, 0x1d}, ...}, ...}, ...)
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
| out := bufio.NewWriter(f)
| defer out.Flush()
> _, err = out.ReadFrom(podLogs)
| if err != nil && err != io.ErrUnexpectedEOF {
| // Failing to stream logs should not cause the test to fail
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
|
| // Watch each container's logs in a goroutine so we can stream them all concurrently.
> go func(pod corev1.Pod, container corev1.Container) {
| defer GinkgoRecover()
... skipping 21 lines ...
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc002487860, 0x28}, {0xc002487890, 0x23}, {0xc0026675b4, 0xb}, ...}, ...}, ...)
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
| out := bufio.NewWriter(f)
| defer out.Flush()
> _, err = out.ReadFrom(podLogs)
| if err != nil && err != io.ErrUnexpectedEOF {
| // Failing to stream logs should not cause the test to fail
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
|
| // Watch each container's logs in a goroutine so we can stream them all concurrently.
> go func(pod corev1.Pod, container corev1.Container) {
| defer GinkgoRecover()
... skipping 6 lines ...
| for {
> select {
| case <-ctx.Done():
| return
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
| }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
|
> go func() {
| defer GinkgoRecover()
| for {
goroutine 24238 [select]
... skipping 3 lines ...
| for {
> select {
| case <-ctx.Done():
| return
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
| }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
|
> go func() {
| defer GinkgoRecover()
| for {
goroutine 24232 [sync.Cond.Wait]
... skipping 18 lines ...
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001085b80, 0x3e}, {0xc001085bc0, 0x39}, {0xc00220f8c0, 0x21}, ...}, ...}, ...)
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
| out := bufio.NewWriter(f)
| defer out.Flush()
> _, err = out.ReadFrom(podLogs)
| if err != nil && err != io.ErrUnexpectedEOF {
| // Failing to stream logs should not cause the test to fail
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
|
| // Watch each container's logs in a goroutine so we can stream them all concurrently.
> go func(pod corev1.Pod, container corev1.Container) {
| defer GinkgoRecover()
... skipping 21 lines ...
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0025b8e70, 0x27}, {0xc0025b8ea0, 0x22}, {0xc0025b0a04, 0xb}, ...}, ...}, ...)
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
| out := bufio.NewWriter(f)
| defer out.Flush()
> _, err = out.ReadFrom(podLogs)
| if err != nil && err != io.ErrUnexpectedEOF {
| // Failing to stream logs should not cause the test to fail
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
|
| // Watch each container's logs in a goroutine so we can stream them all concurrently.
> go func(pod corev1.Pod, container corev1.Container) {
| defer GinkgoRecover()
... skipping 6 lines ...
| for {
> select {
| case <-ctx.Done():
| return
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
| }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
|
> go func() {
| defer GinkgoRecover()
| for {
------------------------------
SSSSSSSSS
... skipping 3 lines ...
STEP: Cleaning up the vSphere session @ 01/23/23 18:24:10.647
STEP: Tearing down the management cluster @ 01/23/23 18:24:10.895
[SynchronizedAfterSuite] PASSED [1.490 seconds]
------------------------------
Summarizing 6 Failures:
[FAIL] ClusterClass Creation using Cluster API quick-start test [PR-Blocking] [ClusterClass] [It] Should create a workload cluster
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
[FAIL] Cluster Creation using Cluster API quick-start test [PR-Blocking] [It] Should create a workload cluster
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
[FAIL] Hardware version upgrade [It] creates a cluster with VM hardware versions upgraded
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
[FAIL] Cluster creation with storage policy [It] should create a cluster successfully
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
[FAIL] Cluster creation with [Ignition] bootstrap [PR-Blocking] [It] Should create a workload cluster
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
[TIMEDOUT] When testing node drain timeout [It] A node should be forcefully removed if it cannot be drained in time
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/node_drain_timeout.go:83
Ran 6 of 17 Specs in 3563.033 seconds
FAIL! - Suite Timeout Elapsed -- 0 Passed | 6 Failed | 1 Pending | 10 Skipped
--- FAIL: TestE2E (3563.03s)
FAIL
Ginkgo ran 1 suite in 1h0m22.039565587s
Test Suite Failed
real 60m22.234s
user 5m45.230s
sys 1m8.791s
make: *** [Makefile:183: e2e] Error 1
Releasing IP claims
ipclaim.ipam.metal3.io "ip-claim-8d120dfe2596a7986202c1849d32f4a4bcf24e54" deleted
ipclaim.ipam.metal3.io "workload-ip-claim-7148687b11cefc6dfbcae7beae16c13a626e93ac" deleted
vpn
WARNING: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com] appears to be a service account. Service account tokens cannot be revoked, but they will expire automatically. To prevent use of the service account token earlier than the expiration, delete or disable the parent service account. To explicitly delete the key associated with the service account use `gcloud iam service-accounts keys delete` instead`.
Revoked credentials:
... skipping 13 lines ...