Error lines from build-log.txt
... skipping 178 lines ...
#18 exporting to image
#18 exporting layers
#18 exporting layers 0.4s done
#18 writing image sha256:9964ddd88f1d3ed29329e93e046bbe631305f0d208d0a51b6c6b7ef784ba04ec done
#18 naming to gcr.io/k8s-staging-cluster-api/capv-manager:e2e done
#18 DONE 0.4s
WARNING: failed to get git remote url: fatal: No remote configured to list refs from.
Activated service account credentials for: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com]
Copying file:///logs/artifacts/tempContainers/image.tar [Content-Type=application/x-tar]...
[1 files][ 74.6 MiB/ 74.6 MiB]
Operation completed over 1 objects/74.6 MiB.
make -C /home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/hack/tools ginkgo
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/hack/tools'
... skipping 129 lines ...
#18 exporting to image
#18 exporting layers done
#18 writing image sha256:9964ddd88f1d3ed29329e93e046bbe631305f0d208d0a51b6c6b7ef784ba04ec done
#18 naming to gcr.io/k8s-staging-cluster-api/capv-manager:e2e done
#18 DONE 0.0s
WARNING: failed to get git remote url: fatal: No remote configured to list refs from.
make release-manifests
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
make manifests STAGE=release MANIFEST_DIR=out PULL_POLICY=IfNotPresent IMAGE=gcr.io/cluster-api-provider-vsphere/release/manager:v1.6.0
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
make generate-flavors FLAVOR_DIR=out
make[3]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
... skipping 259 lines ...
Discovering machine health check resources
Ensuring there is at least 1 Machine that MachineHealthCheck is matching
Patching MachineHealthCheck unhealthy condition to one of the nodes
INFO: Patching the node condition to the node
Waiting for remediation
Waiting until the node with unhealthy node condition is remediated
E0130 17:42:36.810095 26815 request.go:977] Unexpected error when reading response body: read tcp 10.8.0.3:54720->192.168.6.162:6443: read: connection reset by peer
E0130 17:42:36.810079 26815 request.go:977] Unexpected error when reading response body: read tcp 10.8.0.3:54720->192.168.6.162:6443: read: connection reset by peer
E0130 17:42:36.810115 26815 request.go:977] Unexpected error when reading response body: read tcp 10.8.0.3:54720->192.168.6.162:6443: read: connection reset by peer
E0130 17:42:36.810175 26815 request.go:977] Unexpected error when reading response body: read tcp 10.8.0.3:54720->192.168.6.162:6443: read: connection reset by peer
E0130 17:42:36.810191 26815 request.go:977] Unexpected error when reading response body: read tcp 10.8.0.3:54720->192.168.6.162:6443: read: connection reset by peer
E0130 17:42:36.810210 26815 request.go:977] Unexpected error when reading response body: read tcp 10.8.0.3:54720->192.168.6.162:6443: read: connection reset by peer
E0130 17:42:36.810224 26815 request.go:977] Unexpected error when reading response body: read tcp 10.8.0.3:54720->192.168.6.162:6443: read: connection reset by peer
E0130 17:42:36.810239 26815 request.go:977] Unexpected error when reading response body: read tcp 10.8.0.3:54720->192.168.6.162:6443: read: connection reset by peer
E0130 17:42:36.810256 26815 request.go:977] Unexpected error when reading response body: read tcp 10.8.0.3:54720->192.168.6.162:6443: read: connection reset by peer
E0130 17:42:36.810290 26815 request.go:977] Unexpected error when reading response body: read tcp 10.8.0.3:54720->192.168.6.162:6443: read: connection reset by peer
E0130 17:42:36.810320 26815 request.go:977] Unexpected error when reading response body: read tcp 10.8.0.3:54720->192.168.6.162:6443: read: connection reset by peer
E0130 17:42:36.810338 26815 request.go:977] Unexpected error when reading response body: read tcp 10.8.0.3:54720->192.168.6.162:6443: read: connection reset by peer
E0130 17:42:36.810341 26815 request.go:977] Unexpected error when reading response body: read tcp 10.8.0.3:54720->192.168.6.162:6443: read: connection reset by peer
E0130 17:42:36.810348 26815 request.go:977] Unexpected error when reading response body: read tcp 10.8.0.3:54720->192.168.6.162:6443: read: connection reset by peer
E0130 17:42:36.810364 26815 request.go:977] Unexpected error when reading response body: read tcp 10.8.0.3:54720->192.168.6.162:6443: read: connection reset by peer
STEP: PASSED! @ 01/30/23 17:42:39.343
STEP: Dumping logs from the "mhc-remediation-u7dk8w" workload cluster @ 01/30/23 17:42:39.344
Failed to get logs for Machine mhc-remediation-u7dk8w-md-0-7c9545d8d4-thgqs, Cluster mhc-remediation-nhfydo/mhc-remediation-u7dk8w: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine mhc-remediation-u7dk8w-t8j2j, Cluster mhc-remediation-nhfydo/mhc-remediation-u7dk8w: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "mhc-remediation-nhfydo" namespace @ 01/30/23 17:42:43.585
STEP: Deleting cluster mhc-remediation-nhfydo/mhc-remediation-u7dk8w @ 01/30/23 17:42:43.873
STEP: Deleting cluster mhc-remediation-u7dk8w @ 01/30/23 17:42:43.895
INFO: Waiting for the Cluster mhc-remediation-nhfydo/mhc-remediation-u7dk8w to be deleted
STEP: Waiting for cluster mhc-remediation-u7dk8w to be deleted @ 01/30/23 17:42:43.91
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec @ 01/30/23 17:43:13.931
... skipping 35 lines ...
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase @ 01/30/23 17:43:15.08
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by mhc-remediation-1c5y8b/mhc-remediation-ka36o9 to be provisioned
STEP: Waiting for one control plane node to exist @ 01/30/23 17:44:05.164
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/30/23 17:54:05.165
STEP: Dumping logs from the "mhc-remediation-ka36o9" workload cluster @ 01/30/23 17:54:05.166
Failed to get logs for Machine mhc-remediation-ka36o9-chstc, Cluster mhc-remediation-1c5y8b/mhc-remediation-ka36o9: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine mhc-remediation-ka36o9-md-0-6cf97bc696-mdx85, Cluster mhc-remediation-1c5y8b/mhc-remediation-ka36o9: dialing host IP address at : dial tcp :22: connect: connection refused
STEP: Dumping all the Cluster API resources in the "mhc-remediation-1c5y8b" namespace @ 01/30/23 17:54:07.41
STEP: Deleting cluster mhc-remediation-1c5y8b/mhc-remediation-ka36o9 @ 01/30/23 17:54:07.685
STEP: Deleting cluster mhc-remediation-ka36o9 @ 01/30/23 17:54:07.707
INFO: Waiting for the Cluster mhc-remediation-1c5y8b/mhc-remediation-ka36o9 to be deleted
STEP: Waiting for cluster mhc-remediation-ka36o9 to be deleted @ 01/30/23 17:54:07.72
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec @ 01/30/23 17:54:27.734
INFO: Deleting namespace mhc-remediation-1c5y8b
• [FAILED] [673.802 seconds]
When testing unhealthy machines remediation [It] Should successfully trigger KCP remediation
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/mhc_remediations.go:116
[FAILED] Timed out after 600.000s.
No Control Plane machines came into existence.
Expected
<bool>: false
to be true
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/30/23 17:54:05.165
------------------------------
... skipping 31 lines ...
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase @ 01/30/23 17:54:28.962
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by quick-start-c21evg/quick-start-p7ka95-2cclp to be provisioned
STEP: Waiting for one control plane node to exist @ 01/30/23 17:54:49.022
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/30/23 18:04:49.026
STEP: Dumping logs from the "quick-start-p7ka95" workload cluster @ 01/30/23 18:04:49.026
Failed to get logs for Machine quick-start-p7ka95-2cclp-6tsxb, Cluster quick-start-c21evg/quick-start-p7ka95: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine quick-start-p7ka95-md-0-ddqwc-dcc6c77b-bldpx, Cluster quick-start-c21evg/quick-start-p7ka95: dialing host IP address at : dial tcp :22: connect: connection refused
STEP: Dumping all the Cluster API resources in the "quick-start-c21evg" namespace @ 01/30/23 18:04:51.604
STEP: Deleting cluster quick-start-c21evg/quick-start-p7ka95 @ 01/30/23 18:04:51.95
STEP: Deleting cluster quick-start-p7ka95 @ 01/30/23 18:04:51.972
INFO: Waiting for the Cluster quick-start-c21evg/quick-start-p7ka95 to be deleted
STEP: Waiting for cluster quick-start-p7ka95 to be deleted @ 01/30/23 18:04:51.99
STEP: Deleting namespace used for hosting the "quick-start" test spec @ 01/30/23 18:05:12.01
INFO: Deleting namespace quick-start-c21evg
• [FAILED] [644.280 seconds]
ClusterClass Creation using Cluster API quick-start test [PR-Blocking] [ClusterClass] [It] Should create a workload cluster
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/quick_start.go:78
[FAILED] Timed out after 600.003s.
No Control Plane machines came into existence.
Expected
<bool>: false
to be true
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/30/23 18:04:49.026
------------------------------
... skipping 37 lines ...
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase @ 01/30/23 18:05:13.465
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by md-rollout-tg4skn/md-rollout-qaoasf to be provisioned
STEP: Waiting for one control plane node to exist @ 01/30/23 18:05:33.513
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/30/23 18:15:33.514
STEP: Dumping logs from the "md-rollout-qaoasf" workload cluster @ 01/30/23 18:15:33.515
Failed to get logs for Machine md-rollout-qaoasf-2mhgt, Cluster md-rollout-tg4skn/md-rollout-qaoasf: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine md-rollout-qaoasf-md-0-6f98b95547-kns98, Cluster md-rollout-tg4skn/md-rollout-qaoasf: dialing host IP address at : dial tcp :22: connect: connection refused
STEP: Dumping all the Cluster API resources in the "md-rollout-tg4skn" namespace @ 01/30/23 18:15:35.842
STEP: Deleting cluster md-rollout-tg4skn/md-rollout-qaoasf @ 01/30/23 18:15:36.15
STEP: Deleting cluster md-rollout-qaoasf @ 01/30/23 18:15:36.171
INFO: Waiting for the Cluster md-rollout-tg4skn/md-rollout-qaoasf to be deleted
STEP: Waiting for cluster md-rollout-qaoasf to be deleted @ 01/30/23 18:15:36.191
STEP: Deleting namespace used for hosting the "md-rollout" test spec @ 01/30/23 18:15:56.206
INFO: Deleting namespace md-rollout-tg4skn
• [FAILED] [644.188 seconds]
ClusterAPI Machine Deployment Tests Running the MachineDeployment rollout spec [It] Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/md_rollout.go:71
[FAILED] Timed out after 600.000s.
No Control Plane machines came into existence.
Expected
<bool>: false
to be true
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/30/23 18:15:33.514
------------------------------
... skipping 33 lines ...
STEP: Waiting for cluster to enter the provisioned phase @ 01/30/23 18:15:57.753
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by clusterclass-changes-xphail/clusterclass-changes-uvqvb6-cdndr to be provisioned
STEP: Waiting for one control plane node to exist @ 01/30/23 18:16:17.809
[TIMEDOUT] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/clusterclass_changes.go:132 @ 01/30/23 18:25:35.537
STEP: Dumping logs from the "clusterclass-changes-uvqvb6" workload cluster @ 01/30/23 18:25:35.539
Failed to get logs for Machine clusterclass-changes-uvqvb6-cdndr-kjtzr, Cluster clusterclass-changes-xphail/clusterclass-changes-uvqvb6: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine clusterclass-changes-uvqvb6-md-0-5r472-76b99b5845-gr57d, Cluster clusterclass-changes-xphail/clusterclass-changes-uvqvb6: dialing host IP address at : dial tcp :22: connect: connection refused
STEP: Dumping all the Cluster API resources in the "clusterclass-changes-xphail" namespace @ 01/30/23 18:25:37.979
STEP: Deleting cluster clusterclass-changes-xphail/clusterclass-changes-uvqvb6 @ 01/30/23 18:25:38.274
STEP: Deleting cluster clusterclass-changes-uvqvb6 @ 01/30/23 18:25:38.296
INFO: Waiting for the Cluster clusterclass-changes-xphail/clusterclass-changes-uvqvb6 to be deleted
STEP: Waiting for cluster clusterclass-changes-uvqvb6 to be deleted @ 01/30/23 18:25:38.307
STEP: Deleting namespace used for hosting the "clusterclass-changes" test spec @ 01/30/23 18:25:58.322
... skipping 69 lines ...
| for {
> select {
| case <-ctx.Done():
| return
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
| }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
|
> go func() {
| defer GinkgoRecover()
| for {
goroutine 33116 [select]
... skipping 3 lines ...
| for {
> select {
| case <-ctx.Done():
| return
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
| }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
|
> go func() {
| defer GinkgoRecover()
| for {
goroutine 33258 [sync.Cond.Wait, 3 minutes]
... skipping 18 lines ...
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001637100, 0x3e}, {0xc001637140, 0x39}, {0xc00287eba0, 0x21}, ...}, ...}, ...)
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
| out := bufio.NewWriter(f)
| defer out.Flush()
> _, err = out.ReadFrom(podLogs)
| if err != nil && err != io.ErrUnexpectedEOF {
| // Failing to stream logs should not cause the test to fail
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
|
| // Watch each container's logs in a goroutine so we can stream them all concurrently.
> go func(pod corev1.Pod, container corev1.Container) {
| defer GinkgoRecover()
... skipping 21 lines ...
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc002591260, 0x27}, {0xc002591290, 0x22}, {0xc0016d8160, 0xb}, ...}, ...}, ...)
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
| out := bufio.NewWriter(f)
| defer out.Flush()
> _, err = out.ReadFrom(podLogs)
| if err != nil && err != io.ErrUnexpectedEOF {
| // Failing to stream logs should not cause the test to fail
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
|
| // Watch each container's logs in a goroutine so we can stream them all concurrently.
> go func(pod corev1.Pod, container corev1.Container) {
| defer GinkgoRecover()
... skipping 6 lines ...
| for {
> select {
| case <-ctx.Done():
| return
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
| }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
|
> go func() {
| defer GinkgoRecover()
| for {
goroutine 33120 [sync.Cond.Wait]
... skipping 18 lines ...
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001dba960, 0x28}, {0xc001dba990, 0x23}, {0xc0027c71a4, 0xb}, ...}, ...}, ...)
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
| out := bufio.NewWriter(f)
| defer out.Flush()
> _, err = out.ReadFrom(podLogs)
| if err != nil && err != io.ErrUnexpectedEOF {
| // Failing to stream logs should not cause the test to fail
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
|
| // Watch each container's logs in a goroutine so we can stream them all concurrently.
> go func(pod corev1.Pod, container corev1.Container) {
| defer GinkgoRecover()
... skipping 29 lines ...
| for {
> select {
| case <-ctx.Done():
| return
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
| }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
|
> go func() {
| defer GinkgoRecover()
| for {
goroutine 33251 [sync.Cond.Wait, 9 minutes]
... skipping 18 lines ...
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001522fc0, 0x3a}, {0xc001523000, 0x35}, {0xc0025d3e80, 0x1d}, ...}, ...}, ...)
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
| out := bufio.NewWriter(f)
| defer out.Flush()
> _, err = out.ReadFrom(podLogs)
| if err != nil && err != io.ErrUnexpectedEOF {
| // Failing to stream logs should not cause the test to fail
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
|
| // Watch each container's logs in a goroutine so we can stream them all concurrently.
> go func(pod corev1.Pod, container corev1.Container) {
| defer GinkgoRecover()
... skipping 6 lines ...
STEP: Cleaning up the vSphere session @ 01/30/23 18:25:58.342
STEP: Tearing down the management cluster @ 01/30/23 18:25:58.557
[SynchronizedAfterSuite] PASSED [1.654 seconds]
------------------------------
Summarizing 4 Failures:
[FAIL] When testing unhealthy machines remediation [It] Should successfully trigger KCP remediation
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
[FAIL] ClusterClass Creation using Cluster API quick-start test [PR-Blocking] [ClusterClass] [It] Should create a workload cluster
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
[FAIL] ClusterAPI Machine Deployment Tests Running the MachineDeployment rollout spec [It] Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
[TIMEDOUT] When testing ClusterClass changes [ClusterClass] [It] Should successfully rollout the managed topology upon changes to the ClusterClass
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/clusterclass_changes.go:132
Ran 5 of 17 Specs in 3569.946 seconds
FAIL! - Suite Timeout Elapsed -- 1 Passed | 4 Failed | 1 Pending | 11 Skipped
--- FAIL: TestE2E (3569.95s)
FAIL
Ginkgo ran 1 suite in 1h0m24.550358523s
Test Suite Failed
real 60m24.571s
user 5m38.724s
sys 1m5.591s
make: *** [Makefile:183: e2e] Error 1
Releasing IP claims
ipclaim.ipam.metal3.io "ip-claim-5176910efa514d2557c75aae47a121064853acef" deleted
ipclaim.ipam.metal3.io "workload-ip-claim-09cf826415f5ee6b27de3a287a02b0335a52d4d8" deleted
vpn
WARNING: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com] appears to be a service account. Service account tokens cannot be revoked, but they will expire automatically. To prevent use of the service account token earlier than the expiration, delete or disable the parent service account. To explicitly delete the key associated with the service account use `gcloud iam service-accounts keys delete` instead.
Revoked credentials:
... skipping 13 lines ...