Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2023-01-18 17:16
Elapsed: 1h6m
Revision: main

No Test Failures!


Error lines from build-log.txt

... skipping 581 lines ...
  STEP: Waiting for deployment node-drain-iov9zq-unevictable-workload/unevictable-pod-94g to be available @ 01/18/23 17:32:54.87
  STEP: Scale down the controlplane of the workload cluster and make sure that nodes running workload can be deleted even if the draining process is blocked. @ 01/18/23 17:33:05.244
  INFO: Scaling controlplane node-drain-iov9zq/node-drain-0scnef from 3 to 1 replicas
  INFO: Waiting for correct number of replicas to exist
  STEP: PASSED! @ 01/18/23 17:37:05.915
  STEP: Dumping logs from the "node-drain-0scnef" workload cluster @ 01/18/23 17:37:05.916
Failed to get logs for Machine node-drain-0scnef-64f7m, Cluster node-drain-iov9zq/node-drain-0scnef: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "node-drain-iov9zq" namespace @ 01/18/23 17:37:08.045
  STEP: Deleting cluster node-drain-iov9zq/node-drain-0scnef @ 01/18/23 17:37:08.328
  STEP: Deleting cluster node-drain-0scnef @ 01/18/23 17:37:08.347
  INFO: Waiting for the Cluster node-drain-iov9zq/node-drain-0scnef to be deleted
  STEP: Waiting for cluster node-drain-0scnef to be deleted @ 01/18/23 17:37:08.361
  STEP: Deleting namespace used for hosting the "node-drain" test spec @ 01/18/23 17:37:38.382
... skipping 44 lines ...
  INFO: Waiting for the machine deployments to be provisioned
  STEP: Waiting for the workload nodes to exist @ 01/18/23 17:40:29.748
  STEP: Checking all the machines controlled by quick-start-bokh69-md-0 are in the "<None>" failure domain @ 01/18/23 17:41:29.832
  INFO: Waiting for the machine pools to be provisioned
  STEP: PASSED! @ 01/18/23 17:41:29.879
  STEP: Dumping logs from the "quick-start-bokh69" workload cluster @ 01/18/23 17:41:29.879
Failed to get logs for Machine quick-start-bokh69-6j9n6, Cluster quick-start-f8sazr/quick-start-bokh69: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine quick-start-bokh69-md-0-788b5b8b4d-2f4gr, Cluster quick-start-f8sazr/quick-start-bokh69: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "quick-start-f8sazr" namespace @ 01/18/23 17:41:34.653
  STEP: Deleting cluster quick-start-f8sazr/quick-start-bokh69 @ 01/18/23 17:41:34.951
  STEP: Deleting cluster quick-start-bokh69 @ 01/18/23 17:41:34.969
  INFO: Waiting for the Cluster quick-start-f8sazr/quick-start-bokh69 to be deleted
  STEP: Waiting for cluster quick-start-bokh69 to be deleted @ 01/18/23 17:41:34.984
  STEP: Deleting namespace used for hosting the "quick-start" test spec @ 01/18/23 17:42:05.005
... skipping 106 lines ...
  INFO: Waiting for correct number of replicas to exist
  STEP: Scaling the MachineDeployment down to 1 @ 01/18/23 17:52:04.499
  INFO: Scaling machine deployment md-scale-6mkgcd/md-scale-jr7apm-md-0 from 3 to 1 replicas
  INFO: Waiting for correct number of replicas to exist
  STEP: PASSED! @ 01/18/23 17:52:14.633
  STEP: Dumping logs from the "md-scale-jr7apm" workload cluster @ 01/18/23 17:52:14.634
Failed to get logs for Machine md-scale-jr7apm-md-0-5bddd95fcd-n2k44, Cluster md-scale-6mkgcd/md-scale-jr7apm: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine md-scale-jr7apm-qg88c, Cluster md-scale-6mkgcd/md-scale-jr7apm: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "md-scale-6mkgcd" namespace @ 01/18/23 17:52:19.134
  STEP: Deleting cluster md-scale-6mkgcd/md-scale-jr7apm @ 01/18/23 17:52:19.495
  STEP: Deleting cluster md-scale-jr7apm @ 01/18/23 17:52:19.517
  INFO: Waiting for the Cluster md-scale-6mkgcd/md-scale-jr7apm to be deleted
  STEP: Waiting for cluster md-scale-jr7apm to be deleted @ 01/18/23 17:52:19.533
  STEP: Deleting namespace used for hosting the "md-scale" test spec @ 01/18/23 17:52:49.555
... skipping 159 lines ...
  INFO: Waiting for the machine deployments to be provisioned
  STEP: Waiting for the workload nodes to exist @ 01/18/23 18:04:23.455
  STEP: Checking all the machines controlled by quick-start-fuajhs-md-0 are in the "<None>" failure domain @ 01/18/23 18:05:13.52
  INFO: Waiting for the machine pools to be provisioned
  STEP: PASSED! @ 01/18/23 18:05:13.568
  STEP: Dumping logs from the "quick-start-fuajhs" workload cluster @ 01/18/23 18:05:13.568
Failed to get logs for Machine quick-start-fuajhs-7plcr, Cluster quick-start-h3ozxm/quick-start-fuajhs: dialing host IP address at 192.168.6.12: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to get logs for Machine quick-start-fuajhs-md-0-5f76fcdd66-7ldrf, Cluster quick-start-h3ozxm/quick-start-fuajhs: dialing host IP address at 192.168.6.77: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
  STEP: Dumping all the Cluster API resources in the "quick-start-h3ozxm" namespace @ 01/18/23 18:05:16.233
  STEP: Deleting cluster quick-start-h3ozxm/quick-start-fuajhs @ 01/18/23 18:05:16.582
  STEP: Deleting cluster quick-start-fuajhs @ 01/18/23 18:05:16.602
  INFO: Waiting for the Cluster quick-start-h3ozxm/quick-start-fuajhs to be deleted
  STEP: Waiting for cluster quick-start-fuajhs to be deleted @ 01/18/23 18:05:16.616
  STEP: Deleting namespace used for hosting the "quick-start" test spec @ 01/18/23 18:05:46.637
... skipping 57 lines ...
  INFO: Waiting for MachineDeployment rollout for MachineDeploymentTopology "md-0" (class "quick-start-worker") to complete.
  STEP: Deleting a MachineDeploymentTopology in the Cluster Topology and wait for associated MachineDeployment to be deleted @ 01/18/23 18:10:18.525
  INFO: Removing MachineDeploymentTopology from the Cluster Topology.
  INFO: Waiting for MachineDeployment to be deleted.
  STEP: PASSED! @ 01/18/23 18:10:28.605
  STEP: Dumping logs from the "clusterclass-changes-nxnr7r" workload cluster @ 01/18/23 18:10:28.605
Failed to get logs for Machine clusterclass-changes-nxnr7r-qbz2q-pt4xv, Cluster clusterclass-changes-er9fl6/clusterclass-changes-nxnr7r: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "clusterclass-changes-er9fl6" namespace @ 01/18/23 18:10:30.832
  STEP: Deleting cluster clusterclass-changes-er9fl6/clusterclass-changes-nxnr7r @ 01/18/23 18:10:31.213
  STEP: Deleting cluster clusterclass-changes-nxnr7r @ 01/18/23 18:10:31.237
  INFO: Waiting for the Cluster clusterclass-changes-er9fl6/clusterclass-changes-nxnr7r to be deleted
  STEP: Waiting for cluster clusterclass-changes-nxnr7r to be deleted @ 01/18/23 18:10:31.249
  STEP: Deleting namespace used for hosting the "clusterclass-changes" test spec @ 01/18/23 18:10:51.264
... skipping 36 lines ...

  INFO: Waiting for the cluster infrastructure to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 01/18/23 18:10:52.502
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by capv-e2e-sonjex/storage-policy-2fkf0o to be provisioned
  STEP: Waiting for one control plane node to exist @ 01/18/23 18:11:12.543
  [FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/18/23 18:21:12.543
  STEP: Dumping all the Cluster API resources in the "capv-e2e-sonjex" namespace @ 01/18/23 18:21:12.544
  STEP: cleaning up namespace: capv-e2e-sonjex @ 01/18/23 18:21:12.839
  STEP: Deleting cluster storage-policy-2fkf0o @ 01/18/23 18:21:12.858
  INFO: Waiting for the Cluster capv-e2e-sonjex/storage-policy-2fkf0o to be deleted
  STEP: Waiting for cluster storage-policy-2fkf0o to be deleted @ 01/18/23 18:21:12.873
  STEP: Deleting namespace used for hosting test spec @ 01/18/23 18:21:32.889
  INFO: Deleting namespace capv-e2e-sonjex
• [FAILED] [641.625 seconds]
Cluster creation with storage policy [It] should create a cluster successfully
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/storage_policy_test.go:57

  [FAILED] Timed out after 600.000s.
  No Control Plane machines came into existence. 
  Expected
      <bool>: false
  to be true
  In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/18/23 18:21:12.543
------------------------------
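Editor's note on the failure above: controlplane_helpers.go:154 in the cluster-api test framework is a Gomega Eventually poll that keeps counting control-plane Machines and fails with "No Control Plane machines came into existence." once the 600s window elapses. The Go sketch below shows that pattern only in outline; countControlPlaneMachines is a hypothetical stand-in for the framework's client-go List call, not the real API.

    package e2e

    import (
    	"context"
    	"time"

    	. "github.com/onsi/gomega"
    )

    // waitForOneControlPlaneMachine illustrates the shape of the timeout assertion
    // seen above: poll until at least one control-plane Machine exists, or fail
    // with the quoted message when the timeout elapses.
    // countControlPlaneMachines is a hypothetical helper, not part of the framework.
    func waitForOneControlPlaneMachine(ctx context.Context, countControlPlaneMachines func(context.Context) (int, error), timeout, interval time.Duration) {
    	Eventually(func() (bool, error) {
    		count, err := countControlPlaneMachines(ctx)
    		if err != nil {
    			return false, err
    		}
    		return count > 0, nil
    	}, timeout, interval).Should(BeTrue(), "No Control Plane machines came into existence.")
    }

So the "Expected <bool>: false to be true" output simply means the polled condition never returned true before the deadline.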
... skipping 34 lines ...
  STEP: Waiting for cluster to enter the provisioned phase @ 01/18/23 18:21:34.253
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by mhc-remediation-casdkf/mhc-remediation-necpul to be provisioned
  STEP: Waiting for one control plane node to exist @ 01/18/23 18:22:24.302
  [TIMEDOUT] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/mhc_remediations.go:83 @ 01/18/23 18:22:29.258
  STEP: Dumping logs from the "mhc-remediation-necpul" workload cluster @ 01/18/23 18:22:29.259
Failed to get logs for Machine mhc-remediation-necpul-md-0-5c64cd5495-swtgt, Cluster mhc-remediation-casdkf/mhc-remediation-necpul: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for Machine mhc-remediation-necpul-xdbbq, Cluster mhc-remediation-casdkf/mhc-remediation-necpul: dialing host IP address at : dial tcp :22: connect: connection refused
  STEP: Dumping all the Cluster API resources in the "mhc-remediation-casdkf" namespace @ 01/18/23 18:22:29.331
  STEP: Deleting cluster mhc-remediation-casdkf/mhc-remediation-necpul @ 01/18/23 18:22:29.697
  STEP: Deleting cluster mhc-remediation-necpul @ 01/18/23 18:22:29.722
  INFO: Waiting for the Cluster mhc-remediation-casdkf/mhc-remediation-necpul to be deleted
  STEP: Waiting for cluster mhc-remediation-necpul to be deleted @ 01/18/23 18:22:29.731
  STEP: Deleting namespace used for hosting the "mhc-remediation" test spec @ 01/18/23 18:22:49.745
... skipping 84 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001eb1940, 0x3a}, {0xc001eb1980, 0x35}, {0xc000eef2a0, 0x1d}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 6 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {

        goroutine 31701 [sync.Cond.Wait]
... skipping 18 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0025340c0, 0x28}, {0xc0025340f0, 0x23}, {0xc00252e1b4, 0xb}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 21 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00156a400, 0x3e}, {0xc00156a440, 0x39}, {0xc0024be570, 0x21}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 29 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {

        goroutine 31697 [select]
... skipping 3 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {

        goroutine 31717 [select]
... skipping 3 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {

        goroutine 31694 [sync.Cond.Wait]
... skipping 18 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0024bfe90, 0x27}, {0xc0024bfec0, 0x22}, {0xc0025984b0, 0xb}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 11 lines ...
  STEP: Cleaning up the vSphere session @ 01/18/23 18:22:49.764
  STEP: Tearing down the management cluster @ 01/18/23 18:22:49.9
[SynchronizedAfterSuite] PASSED [1.436 seconds]
------------------------------

Summarizing 2 Failures:
  [FAIL] Cluster creation with storage policy [It] should create a cluster successfully
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
  [TIMEDOUT] When testing unhealthy machines remediation [It] Should successfully trigger machine deployment remediation
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/mhc_remediations.go:83

Ran 10 of 17 Specs in 3566.975 seconds
FAIL! - Suite Timeout Elapsed -- 8 Passed | 2 Failed | 1 Pending | 6 Skipped
--- FAIL: TestE2E (3566.98s)
FAIL

Ginkgo ran 1 suite in 1h0m22.031737072s

Test Suite Failed

real	60m22.053s
user	5m46.586s
sys	1m9.469s
make: *** [Makefile:183: e2e] Error 1
Releasing IP claims
ipclaim.ipam.metal3.io "ip-claim-552a9076e20bc61a884e19f581887d1383ea0077" deleted
ipclaim.ipam.metal3.io "workload-ip-claim-ecaa6a6bcdb0c242678e1430594492efd7f89ad1" deleted
vpn
WARNING: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com] appears to be a service account. Service account tokens cannot be revoked, but they will expire automatically. To prevent use of the service account token earlier than the expiration, delete or disable the parent service account. To explicitly delete the key associated with the service account use `gcloud iam service-accounts keys delete` instead.
Revoked credentials:
... skipping 13 lines ...