Error lines from build-log.txt
... skipping 547 lines ...
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase @ 01/21/23 05:24:11.618
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by mhc-remediation-tfseq0/mhc-remediation-czf156 to be provisioned
STEP: Waiting for one control plane node to exist @ 01/21/23 05:24:41.66
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/21/23 05:34:41.661
STEP: Dumping logs from the "mhc-remediation-czf156" workload cluster @ 01/21/23 05:34:41.661
Failed to get logs for Machine mhc-remediation-czf156-md-0-74b6d6fc99-hzvqt, Cluster mhc-remediation-tfseq0/mhc-remediation-czf156: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for Machine mhc-remediation-czf156-p6ltw, Cluster mhc-remediation-tfseq0/mhc-remediation-czf156: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "mhc-remediation-tfseq0" namespace @ 01/21/23 05:34:43.812
STEP: Deleting cluster mhc-remediation-tfseq0/mhc-remediation-czf156 @ 01/21/23 05:34:44.1
STEP: Deleting cluster mhc-remediation-czf156 @ 01/21/23 05:34:44.119
INFO: Waiting for the Cluster mhc-remediation-tfseq0/mhc-remediation-czf156 to be deleted
STEP: Waiting for cluster mhc-remediation-czf156 to be deleted @ 01/21/23 05:34:44.134
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec @ 01/21/23 05:35:04.148
INFO: Deleting namespace mhc-remediation-tfseq0
• [FAILED] [655.160 seconds]
When testing unhealthy machines remediation [It] Should successfully trigger machine deployment remediation
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/mhc_remediations.go:83
[FAILED] Timed out after 600.001s.
No Control Plane machines came into existence.
Expected
<bool>: false
to be true
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/21/23 05:34:41.661
------------------------------
... skipping 32 lines ...
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase @ 01/21/23 05:35:05.358
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by mhc-remediation-ef5t81/mhc-remediation-hz506m to be provisioned
STEP: Waiting for one control plane node to exist @ 01/21/23 05:35:55.412
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/21/23 05:45:55.414
STEP: Dumping logs from the "mhc-remediation-hz506m" workload cluster @ 01/21/23 05:45:55.414
Failed to get logs for Machine mhc-remediation-hz506m-md-0-767585bcf9-7vbnj, Cluster mhc-remediation-ef5t81/mhc-remediation-hz506m: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for Machine mhc-remediation-hz506m-vnr5f, Cluster mhc-remediation-ef5t81/mhc-remediation-hz506m: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "mhc-remediation-ef5t81" namespace @ 01/21/23 05:45:57.721
STEP: Deleting cluster mhc-remediation-ef5t81/mhc-remediation-hz506m @ 01/21/23 05:45:58.046
STEP: Deleting cluster mhc-remediation-hz506m @ 01/21/23 05:45:58.066
INFO: Waiting for the Cluster mhc-remediation-ef5t81/mhc-remediation-hz506m to be deleted
STEP: Waiting for cluster mhc-remediation-hz506m to be deleted @ 01/21/23 05:45:58.079
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec @ 01/21/23 05:46:18.095
INFO: Deleting namespace mhc-remediation-ef5t81
• [FAILED] [673.945 seconds]
When testing unhealthy machines remediation [It] Should successfully trigger KCP remediation
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/mhc_remediations.go:116
[FAILED] Timed out after 600.001s.
No Control Plane machines came into existence.
Expected
<bool>: false
to be true
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/21/23 05:45:55.414
------------------------------
... skipping 34 lines ...
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by node-drain-usrxj1/node-drain-s6zk0x to be provisioned
STEP: Waiting for one control plane node to exist @ 01/21/23 05:47:09.232
INFO: Waiting for control plane to be ready
INFO: Waiting for the remaining control plane machines managed by node-drain-usrxj1/node-drain-s6zk0x to be provisioned
STEP: Waiting for all control plane nodes to exist @ 01/21/23 05:49:09.323
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:117 @ 01/21/23 05:59:09.324
STEP: Dumping logs from the "node-drain-s6zk0x" workload cluster @ 01/21/23 05:59:09.324
Failed to get logs for Machine node-drain-s6zk0x-gd49j, Cluster node-drain-usrxj1/node-drain-s6zk0x: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine node-drain-s6zk0x-md-0-7d556fdb9c-h6rqq, Cluster node-drain-usrxj1/node-drain-s6zk0x: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "node-drain-usrxj1" namespace @ 01/21/23 05:59:14.291
STEP: Deleting cluster node-drain-usrxj1/node-drain-s6zk0x @ 01/21/23 05:59:14.604
STEP: Deleting cluster node-drain-s6zk0x @ 01/21/23 05:59:14.627
INFO: Waiting for the Cluster node-drain-usrxj1/node-drain-s6zk0x to be deleted
STEP: Waiting for cluster node-drain-s6zk0x to be deleted @ 01/21/23 05:59:14.644
STEP: Deleting namespace used for hosting the "node-drain" test spec @ 01/21/23 06:00:04.674
INFO: Deleting namespace node-drain-usrxj1
• [FAILED] [826.581 seconds]
When testing node drain timeout [It] A node should be forcefully removed if it cannot be drained in time
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/node_drain_timeout.go:83
[FAILED] Timed out after 600.000s.
Timed out waiting for 3 control plane machines to exist
Expected
<int>: 1
to equal
<int>: 3
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:117 @ 01/21/23 05:59:09.324
... skipping 32 lines ...
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase @ 01/21/23 06:00:05.958
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by md-scale-69qktu/md-scale-1n4nuo to be provisioned
STEP: Waiting for one control plane node to exist @ 01/21/23 06:00:56.02
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/21/23 06:10:56.021
STEP: Dumping logs from the "md-scale-1n4nuo" workload cluster @ 01/21/23 06:10:56.021
Failed to get logs for Machine md-scale-1n4nuo-md-0-7f449d8c5b-zjt4v, Cluster md-scale-69qktu/md-scale-1n4nuo: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for Machine md-scale-1n4nuo-tkxkv, Cluster md-scale-69qktu/md-scale-1n4nuo: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "md-scale-69qktu" namespace @ 01/21/23 06:10:58.194
STEP: Deleting cluster md-scale-69qktu/md-scale-1n4nuo @ 01/21/23 06:10:58.477
STEP: Deleting cluster md-scale-1n4nuo @ 01/21/23 06:10:58.499
INFO: Waiting for the Cluster md-scale-69qktu/md-scale-1n4nuo to be deleted
STEP: Waiting for cluster md-scale-1n4nuo to be deleted @ 01/21/23 06:10:58.514
STEP: Deleting namespace used for hosting the "md-scale" test spec @ 01/21/23 06:11:18.529
INFO: Deleting namespace md-scale-69qktu
• [FAILED] [673.855 seconds]
When testing MachineDeployment scale out/in [It] Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/md_scale.go:71
[FAILED] Timed out after 600.000s.
No Control Plane machines came into existence.
Expected
<bool>: false
to be true
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/21/23 06:10:56.021
------------------------------
... skipping 119 lines ...
| for {
> select {
| case <-ctx.Done():
| return
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
| }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
|
> go func() {
| defer GinkgoRecover()
| for {
goroutine 31936 [select]
... skipping 3 lines ...
| for {
> select {
| case <-ctx.Done():
| return
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
| }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
|
> go func() {
| defer GinkgoRecover()
| for {
goroutine 32058 [sync.Cond.Wait]
... skipping 18 lines ...
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc002343410, 0x27}, {0xc002343440, 0x22}, {0xc0025c9c10, 0xb}, ...}, ...}, ...)
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
| out := bufio.NewWriter(f)
| defer out.Flush()
> _, err = out.ReadFrom(podLogs)
| if err != nil && err != io.ErrUnexpectedEOF {
| // Failing to stream logs should not cause the test to fail
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
|
| // Watch each container's logs in a goroutine so we can stream them all concurrently.
> go func(pod corev1.Pod, container corev1.Container) {
| defer GinkgoRecover()
... skipping 21 lines ...
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0001f11a0, 0x28}, {0xc0001f1260, 0x23}, {0xc002990f74, 0xb}, ...}, ...}, ...)
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
| out := bufio.NewWriter(f)
| defer out.Flush()
> _, err = out.ReadFrom(podLogs)
| if err != nil && err != io.ErrUnexpectedEOF {
| // Failing to stream logs should not cause the test to fail
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
|
| // Watch each container's logs in a goroutine so we can stream them all concurrently.
> go func(pod corev1.Pod, container corev1.Container) {
| defer GinkgoRecover()
... skipping 6 lines ...
| for {
> select {
| case <-ctx.Done():
| return
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
| }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
|
> go func() {
| defer GinkgoRecover()
| for {
goroutine 32069 [sync.Cond.Wait, 9 minutes]
... skipping 18 lines ...
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000ef57c0, 0x3a}, {0xc000ef5800, 0x35}, {0xc001389bc0, 0x1d}, ...}, ...}, ...)
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
| out := bufio.NewWriter(f)
| defer out.Flush()
> _, err = out.ReadFrom(podLogs)
| if err != nil && err != io.ErrUnexpectedEOF {
| // Failing to stream logs should not cause the test to fail
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
|
| // Watch each container's logs in a goroutine so we can stream them all concurrently.
> go func(pod corev1.Pod, container corev1.Container) {
| defer GinkgoRecover()
... skipping 6 lines ...
| for {
> select {
| case <-ctx.Done():
| return
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
| }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
|
> go func() {
| defer GinkgoRecover()
| for {
goroutine 30848 [sync.Cond.Wait]
... skipping 18 lines ...
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0005facc0, 0x3e}, {0xc0005fad00, 0x39}, {0xc002915080, 0x21}, ...}, ...}, ...)
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
| out := bufio.NewWriter(f)
| defer out.Flush()
> _, err = out.ReadFrom(podLogs)
| if err != nil && err != io.ErrUnexpectedEOF {
| // Failing to stream logs should not cause the test to fail
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
|
| // Watch each container's logs in a goroutine so we can stream them all concurrently.
> go func(pod corev1.Pod, container corev1.Container) {
| defer GinkgoRecover()
... skipping 36 lines ...
STEP: Tearing down the management cluster @ 01/21/23 06:21:44.344
STEP: Deleting namespace used for hosting test spec @ 01/21/23 06:21:44.523
[SynchronizedAfterSuite] PASSED [1.816 seconds]
------------------------------
Summarizing 5 Failures:
[FAIL] When testing unhealthy machines remediation [It] Should successfully trigger machine deployment remediation
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
[FAIL] When testing unhealthy machines remediation [It] Should successfully trigger KCP remediation
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
[FAIL] When testing node drain timeout [It] A node should be forcefully removed if it cannot be drained in time
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:117
[FAIL] When testing MachineDeployment scale out/in [It] Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
[TIMEDOUT] Cluster creation with storage policy [It] should create a cluster successfully
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/storage_policy_test.go:57
Ran 5 of 17 Specs in 3577.162 seconds
FAIL! - Suite Timeout Elapsed -- 0 Passed | 5 Failed | 1 Pending | 11 Skipped
--- FAIL: TestE2E (3577.16s)
FAIL
Ginkgo ran 1 suite in 1h0m31.920628873s
Test Suite Failed
real 60m31.941s
user 5m34.569s
sys 1m6.458s
make: *** [Makefile:183: e2e] Error 1
Releasing IP claims
ipclaim.ipam.metal3.io "ip-claim-a0031dcd36b5321a28f4ba838be53b0357492572" deleted
ipclaim.ipam.metal3.io "workload-ip-claim-8bb26bf00dfb7a29990d5e29d8c44c347b067df1" deleted
vpn
WARNING: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com] appears to be a service account. Service account tokens cannot be revoked, but they will expire automatically. To prevent use of the service account token earlier than the expiration, delete or disable the parent service account. To explicitly delete the key associated with the service account use `gcloud iam service-accounts keys delete` instead.
Revoked credentials:
... skipping 13 lines ...