Error lines from build-log.txt
... skipping 581 lines ...
STEP: Waiting for deployment node-drain-iov9zq-unevictable-workload/unevictable-pod-94g to be available @ 01/18/23 17:32:54.87
STEP: Scale down the controlplane of the workload cluster and make sure that nodes running workload can be deleted even the draining process is blocked. @ 01/18/23 17:33:05.244
INFO: Scaling controlplane node-drain-iov9zq/node-drain-0scnef from 3 to 1 replicas
INFO: Waiting for correct number of replicas to exist
STEP: PASSED! @ 01/18/23 17:37:05.915
STEP: Dumping logs from the "node-drain-0scnef" workload cluster @ 01/18/23 17:37:05.916
Failed to get logs for Machine node-drain-0scnef-64f7m, Cluster node-drain-iov9zq/node-drain-0scnef: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "node-drain-iov9zq" namespace @ 01/18/23 17:37:08.045
STEP: Deleting cluster node-drain-iov9zq/node-drain-0scnef @ 01/18/23 17:37:08.328
STEP: Deleting cluster node-drain-0scnef @ 01/18/23 17:37:08.347
INFO: Waiting for the Cluster node-drain-iov9zq/node-drain-0scnef to be deleted
STEP: Waiting for cluster node-drain-0scnef to be deleted @ 01/18/23 17:37:08.361
STEP: Deleting namespace used for hosting the "node-drain" test spec @ 01/18/23 17:37:38.382
... skipping 44 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist @ 01/18/23 17:40:29.748
STEP: Checking all the machines controlled by quick-start-bokh69-md-0 are in the "<None>" failure domain @ 01/18/23 17:41:29.832
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED! @ 01/18/23 17:41:29.879
STEP: Dumping logs from the "quick-start-bokh69" workload cluster @ 01/18/23 17:41:29.879
Failed to get logs for Machine quick-start-bokh69-6j9n6, Cluster quick-start-f8sazr/quick-start-bokh69: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine quick-start-bokh69-md-0-788b5b8b4d-2f4gr, Cluster quick-start-f8sazr/quick-start-bokh69: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "quick-start-f8sazr" namespace @ 01/18/23 17:41:34.653
STEP: Deleting cluster quick-start-f8sazr/quick-start-bokh69 @ 01/18/23 17:41:34.951
STEP: Deleting cluster quick-start-bokh69 @ 01/18/23 17:41:34.969
INFO: Waiting for the Cluster quick-start-f8sazr/quick-start-bokh69 to be deleted
STEP: Waiting for cluster quick-start-bokh69 to be deleted @ 01/18/23 17:41:34.984
STEP: Deleting namespace used for hosting the "quick-start" test spec @ 01/18/23 17:42:05.005
... skipping 106 lines ...
INFO: Waiting for correct number of replicas to exist
STEP: Scaling the MachineDeployment down to 1 @ 01/18/23 17:52:04.499
INFO: Scaling machine deployment md-scale-6mkgcd/md-scale-jr7apm-md-0 from 3 to 1 replicas
INFO: Waiting for correct number of replicas to exist
STEP: PASSED! @ 01/18/23 17:52:14.633
STEP: Dumping logs from the "md-scale-jr7apm" workload cluster @ 01/18/23 17:52:14.634
Failed to get logs for Machine md-scale-jr7apm-md-0-5bddd95fcd-n2k44, Cluster md-scale-6mkgcd/md-scale-jr7apm: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine md-scale-jr7apm-qg88c, Cluster md-scale-6mkgcd/md-scale-jr7apm: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "md-scale-6mkgcd" namespace @ 01/18/23 17:52:19.134
STEP: Deleting cluster md-scale-6mkgcd/md-scale-jr7apm @ 01/18/23 17:52:19.495
STEP: Deleting cluster md-scale-jr7apm @ 01/18/23 17:52:19.517
INFO: Waiting for the Cluster md-scale-6mkgcd/md-scale-jr7apm to be deleted
STEP: Waiting for cluster md-scale-jr7apm to be deleted @ 01/18/23 17:52:19.533
STEP: Deleting namespace used for hosting the "md-scale" test spec @ 01/18/23 17:52:49.555
... skipping 159 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist @ 01/18/23 18:04:23.455
STEP: Checking all the machines controlled by quick-start-fuajhs-md-0 are in the "<None>" failure domain @ 01/18/23 18:05:13.52
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED! @ 01/18/23 18:05:13.568
STEP: Dumping logs from the "quick-start-fuajhs" workload cluster @ 01/18/23 18:05:13.568
Failed to get logs for Machine quick-start-fuajhs-7plcr, Cluster quick-start-h3ozxm/quick-start-fuajhs: dialing host IP address at 192.168.6.12: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to get logs for Machine quick-start-fuajhs-md-0-5f76fcdd66-7ldrf, Cluster quick-start-h3ozxm/quick-start-fuajhs: dialing host IP address at 192.168.6.77: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
STEP: Dumping all the Cluster API resources in the "quick-start-h3ozxm" namespace @ 01/18/23 18:05:16.233
STEP: Deleting cluster quick-start-h3ozxm/quick-start-fuajhs @ 01/18/23 18:05:16.582
STEP: Deleting cluster quick-start-fuajhs @ 01/18/23 18:05:16.602
INFO: Waiting for the Cluster quick-start-h3ozxm/quick-start-fuajhs to be deleted
STEP: Waiting for cluster quick-start-fuajhs to be deleted @ 01/18/23 18:05:16.616
STEP: Deleting namespace used for hosting the "quick-start" test spec @ 01/18/23 18:05:46.637
... skipping 57 lines ...
INFO: Waiting for MachineDeployment rollout for MachineDeploymentTopology "md-0" (class "quick-start-worker") to complete.
STEP: Deleting a MachineDeploymentTopology in the Cluster Topology and wait for associated MachineDeployment to be deleted @ 01/18/23 18:10:18.525
INFO: Removing MachineDeploymentTopology from the Cluster Topology.
INFO: Waiting for MachineDeployment to be deleted.
STEP: PASSED! @ 01/18/23 18:10:28.605
STEP: Dumping logs from the "clusterclass-changes-nxnr7r" workload cluster @ 01/18/23 18:10:28.605
Failed to get logs for Machine clusterclass-changes-nxnr7r-qbz2q-pt4xv, Cluster clusterclass-changes-er9fl6/clusterclass-changes-nxnr7r: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "clusterclass-changes-er9fl6" namespace @ 01/18/23 18:10:30.832
STEP: Deleting cluster clusterclass-changes-er9fl6/clusterclass-changes-nxnr7r @ 01/18/23 18:10:31.213
STEP: Deleting cluster clusterclass-changes-nxnr7r @ 01/18/23 18:10:31.237
INFO: Waiting for the Cluster clusterclass-changes-er9fl6/clusterclass-changes-nxnr7r to be deleted
STEP: Waiting for cluster clusterclass-changes-nxnr7r to be deleted @ 01/18/23 18:10:31.249
STEP: Deleting namespace used for hosting the "clusterclass-changes" test spec @ 01/18/23 18:10:51.264
... skipping 36 lines ...
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase @ 01/18/23 18:10:52.502
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capv-e2e-sonjex/storage-policy-2fkf0o to be provisioned
STEP: Waiting for one control plane node to exist @ 01/18/23 18:11:12.543
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/18/23 18:21:12.543
STEP: Dumping all the Cluster API resources in the "capv-e2e-sonjex" namespace @ 01/18/23 18:21:12.544
STEP: cleaning up namespace: capv-e2e-sonjex @ 01/18/23 18:21:12.839
STEP: Deleting cluster storage-policy-2fkf0o @ 01/18/23 18:21:12.858
INFO: Waiting for the Cluster capv-e2e-sonjex/storage-policy-2fkf0o to be deleted
STEP: Waiting for cluster storage-policy-2fkf0o to be deleted @ 01/18/23 18:21:12.873
STEP: Deleting namespace used for hosting test spec @ 01/18/23 18:21:32.889
INFO: Deleting namespace capv-e2e-sonjex
• [FAILED] [641.625 seconds]
Cluster creation with storage policy [It] should create a cluster successfully
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/storage_policy_test.go:57
[FAILED] Timed out after 600.000s.
No Control Plane machines came into existence.
Expected
    <bool>: false
to be true
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/18/23 18:21:12.543
------------------------------
... skipping 34 lines ...
STEP: Waiting for cluster to enter the provisioned phase @ 01/18/23 18:21:34.253
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by mhc-remediation-casdkf/mhc-remediation-necpul to be provisioned
STEP: Waiting for one control plane node to exist @ 01/18/23 18:22:24.302
[TIMEDOUT] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/mhc_remediations.go:83 @ 01/18/23 18:22:29.258
STEP: Dumping logs from the "mhc-remediation-necpul" workload cluster @ 01/18/23 18:22:29.259
Failed to get logs for Machine mhc-remediation-necpul-md-0-5c64cd5495-swtgt, Cluster mhc-remediation-casdkf/mhc-remediation-necpul: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for Machine mhc-remediation-necpul-xdbbq, Cluster mhc-remediation-casdkf/mhc-remediation-necpul: dialing host IP address at : dial tcp :22: connect: connection refused
STEP: Dumping all the Cluster API resources in the "mhc-remediation-casdkf" namespace @ 01/18/23 18:22:29.331
STEP: Deleting cluster mhc-remediation-casdkf/mhc-remediation-necpul @ 01/18/23 18:22:29.697
STEP: Deleting cluster mhc-remediation-necpul @ 01/18/23 18:22:29.722
INFO: Waiting for the Cluster mhc-remediation-casdkf/mhc-remediation-necpul to be deleted
STEP: Waiting for cluster mhc-remediation-necpul to be deleted @ 01/18/23 18:22:29.731
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec @ 01/18/23 18:22:49.745
... skipping 84 lines ...
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001eb1940, 0x3a}, {0xc001eb1980, 0x35}, {0xc000eef2a0, 0x1d}, ...}, ...}, ...)
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
| out := bufio.NewWriter(f)
| defer out.Flush()
> _, err = out.ReadFrom(podLogs)
| if err != nil && err != io.ErrUnexpectedEOF {
| // Failing to stream logs should not cause the test to fail
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
|
| // Watch each container's logs in a goroutine so we can stream them all concurrently.
> go func(pod corev1.Pod, container corev1.Container) {
| defer GinkgoRecover()
... skipping 6 lines ...
| for {
> select {
| case <-ctx.Done():
| return
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
| }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
|
> go func() {
| defer GinkgoRecover()
| for {
goroutine 31701 [sync.Cond.Wait]
... skipping 18 lines ...
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0025340c0, 0x28}, {0xc0025340f0, 0x23}, {0xc00252e1b4, 0xb}, ...}, ...}, ...)
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
| out := bufio.NewWriter(f)
| defer out.Flush()
> _, err = out.ReadFrom(podLogs)
| if err != nil && err != io.ErrUnexpectedEOF {
| // Failing to stream logs should not cause the test to fail
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
|
| // Watch each container's logs in a goroutine so we can stream them all concurrently.
> go func(pod corev1.Pod, container corev1.Container) {
| defer GinkgoRecover()
... skipping 21 lines ...
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00156a400, 0x3e}, {0xc00156a440, 0x39}, {0xc0024be570, 0x21}, ...}, ...}, ...)
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
| out := bufio.NewWriter(f)
| defer out.Flush()
> _, err = out.ReadFrom(podLogs)
| if err != nil && err != io.ErrUnexpectedEOF {
| // Failing to stream logs should not cause the test to fail
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
|
| // Watch each container's logs in a goroutine so we can stream them all concurrently.
> go func(pod corev1.Pod, container corev1.Container) {
| defer GinkgoRecover()
... skipping 29 lines ...
| for {
> select {
| case <-ctx.Done():
| return
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
| }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
|
> go func() {
| defer GinkgoRecover()
| for {
goroutine 31697 [select]
... skipping 3 lines ...
| for {
> select {
| case <-ctx.Done():
| return
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
| }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
|
> go func() {
| defer GinkgoRecover()
| for {
goroutine 31717 [select]
... skipping 3 lines ...
| for {
> select {
| case <-ctx.Done():
| return
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
| }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
|
> go func() {
| defer GinkgoRecover()
| for {
goroutine 31694 [sync.Cond.Wait]
... skipping 18 lines ...
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0024bfe90, 0x27}, {0xc0024bfec0, 0x22}, {0xc0025984b0, 0xb}, ...}, ...}, ...)
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
| out := bufio.NewWriter(f)
| defer out.Flush()
> _, err = out.ReadFrom(podLogs)
| if err != nil && err != io.ErrUnexpectedEOF {
| // Failing to stream logs should not cause the test to fail
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
|
| // Watch each container's logs in a goroutine so we can stream them all concurrently.
> go func(pod corev1.Pod, container corev1.Container) {
| defer GinkgoRecover()
... skipping 11 lines ...
STEP: Cleaning up the vSphere session @ 01/18/23 18:22:49.764
STEP: Tearing down the management cluster @ 01/18/23 18:22:49.9
[SynchronizedAfterSuite] PASSED [1.436 seconds]
------------------------------
Summarizing 2 Failures:
[FAIL] Cluster creation with storage policy [It] should create a cluster successfully
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
[TIMEDOUT] When testing unhealthy machines remediation [It] Should successfully trigger machine deployment remediation
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/mhc_remediations.go:83
Ran 10 of 17 Specs in 3566.975 seconds
FAIL! - Suite Timeout Elapsed -- 8 Passed | 2 Failed | 1 Pending | 6 Skipped
--- FAIL: TestE2E (3566.98s)
FAIL
Ginkgo ran 1 suite in 1h0m22.031737072s
Test Suite Failed
real 60m22.053s
user 5m46.586s
sys 1m9.469s
make: *** [Makefile:183: e2e] Error 1
Releasing IP claims
ipclaim.ipam.metal3.io "ip-claim-552a9076e20bc61a884e19f581887d1383ea0077" deleted
ipclaim.ipam.metal3.io "workload-ip-claim-ecaa6a6bcdb0c242678e1430594492efd7f89ad1" deleted
vpn
WARNING: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com] appears to be a service account. Service account tokens cannot be revoked, but they will expire automatically. To prevent use of the service account token earlier than the expiration, delete or disable the parent service account. To explicitly delete the key associated with the service account use `gcloud iam service-accounts keys delete` instead.
Revoked credentials:
... skipping 13 lines ...