Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2023-01-21 05:17
Elapsed: 1h5m
Revision: main

Error lines from build-log.txt

... skipping 547 lines ...

  INFO: Waiting for the cluster infrastructure to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 01/21/23 05:24:11.618
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by mhc-remediation-tfseq0/mhc-remediation-czf156 to be provisioned
  STEP: Waiting for one control plane node to exist @ 01/21/23 05:24:41.66
  [FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/21/23 05:34:41.661
  STEP: Dumping logs from the "mhc-remediation-czf156" workload cluster @ 01/21/23 05:34:41.661
Failed to get logs for Machine mhc-remediation-czf156-md-0-74b6d6fc99-hzvqt, Cluster mhc-remediation-tfseq0/mhc-remediation-czf156: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for Machine mhc-remediation-czf156-p6ltw, Cluster mhc-remediation-tfseq0/mhc-remediation-czf156: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "mhc-remediation-tfseq0" namespace @ 01/21/23 05:34:43.812
  STEP: Deleting cluster mhc-remediation-tfseq0/mhc-remediation-czf156 @ 01/21/23 05:34:44.1
  STEP: Deleting cluster mhc-remediation-czf156 @ 01/21/23 05:34:44.119
  INFO: Waiting for the Cluster mhc-remediation-tfseq0/mhc-remediation-czf156 to be deleted
  STEP: Waiting for cluster mhc-remediation-czf156 to be deleted @ 01/21/23 05:34:44.134
  STEP: Deleting namespace used for hosting the "mhc-remediation" test spec @ 01/21/23 05:35:04.148
  INFO: Deleting namespace mhc-remediation-tfseq0
• [FAILED] [655.160 seconds]
When testing unhealthy machines remediation [It] Should successfully trigger machine deployment remediation
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/mhc_remediations.go:83

  [FAILED] Timed out after 600.001s.
  No Control Plane machines came into existence. 
  Expected
      <bool>: false
  to be true
  In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/21/23 05:34:41.661
------------------------------
... skipping 32 lines ...

  INFO: Waiting for the cluster infrastructure to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 01/21/23 05:35:05.358
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by mhc-remediation-ef5t81/mhc-remediation-hz506m to be provisioned
  STEP: Waiting for one control plane node to exist @ 01/21/23 05:35:55.412
  [FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/21/23 05:45:55.414
  STEP: Dumping logs from the "mhc-remediation-hz506m" workload cluster @ 01/21/23 05:45:55.414
Failed to get logs for Machine mhc-remediation-hz506m-md-0-767585bcf9-7vbnj, Cluster mhc-remediation-ef5t81/mhc-remediation-hz506m: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for Machine mhc-remediation-hz506m-vnr5f, Cluster mhc-remediation-ef5t81/mhc-remediation-hz506m: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "mhc-remediation-ef5t81" namespace @ 01/21/23 05:45:57.721
  STEP: Deleting cluster mhc-remediation-ef5t81/mhc-remediation-hz506m @ 01/21/23 05:45:58.046
  STEP: Deleting cluster mhc-remediation-hz506m @ 01/21/23 05:45:58.066
  INFO: Waiting for the Cluster mhc-remediation-ef5t81/mhc-remediation-hz506m to be deleted
  STEP: Waiting for cluster mhc-remediation-hz506m to be deleted @ 01/21/23 05:45:58.079
  STEP: Deleting namespace used for hosting the "mhc-remediation" test spec @ 01/21/23 05:46:18.095
  INFO: Deleting namespace mhc-remediation-ef5t81
• [FAILED] [673.945 seconds]
When testing unhealthy machines remediation [It] Should successfully trigger KCP remediation
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/mhc_remediations.go:116

  [FAILED] Timed out after 600.001s.
  No Control Plane machines came into existence. 
  Expected
      <bool>: false
  to be true
  In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/21/23 05:45:55.414
------------------------------
... skipping 34 lines ...
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by node-drain-usrxj1/node-drain-s6zk0x to be provisioned
  STEP: Waiting for one control plane node to exist @ 01/21/23 05:47:09.232
  INFO: Waiting for control plane to be ready
  INFO: Waiting for the remaining control plane machines managed by node-drain-usrxj1/node-drain-s6zk0x to be provisioned
  STEP: Waiting for all control plane nodes to exist @ 01/21/23 05:49:09.323
  [FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:117 @ 01/21/23 05:59:09.324
  STEP: Dumping logs from the "node-drain-s6zk0x" workload cluster @ 01/21/23 05:59:09.324
Failed to get logs for Machine node-drain-s6zk0x-gd49j, Cluster node-drain-usrxj1/node-drain-s6zk0x: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine node-drain-s6zk0x-md-0-7d556fdb9c-h6rqq, Cluster node-drain-usrxj1/node-drain-s6zk0x: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "node-drain-usrxj1" namespace @ 01/21/23 05:59:14.291
  STEP: Deleting cluster node-drain-usrxj1/node-drain-s6zk0x @ 01/21/23 05:59:14.604
  STEP: Deleting cluster node-drain-s6zk0x @ 01/21/23 05:59:14.627
  INFO: Waiting for the Cluster node-drain-usrxj1/node-drain-s6zk0x to be deleted
  STEP: Waiting for cluster node-drain-s6zk0x to be deleted @ 01/21/23 05:59:14.644
  STEP: Deleting namespace used for hosting the "node-drain" test spec @ 01/21/23 06:00:04.674
  INFO: Deleting namespace node-drain-usrxj1
• [FAILED] [826.581 seconds]
When testing node drain timeout [It] A node should be forcefully removed if it cannot be drained in time
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/node_drain_timeout.go:83

  [FAILED] Timed out after 600.000s.
  Timed out waiting for 3 control plane machines to exist
  Expected
      <int>: 1
  to equal
      <int>: 3
  In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:117 @ 01/21/23 05:59:09.324
... skipping 32 lines ...

  INFO: Waiting for the cluster infrastructure to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 01/21/23 06:00:05.958
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by md-scale-69qktu/md-scale-1n4nuo to be provisioned
  STEP: Waiting for one control plane node to exist @ 01/21/23 06:00:56.02
  [FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/21/23 06:10:56.021
  STEP: Dumping logs from the "md-scale-1n4nuo" workload cluster @ 01/21/23 06:10:56.021
Failed to get logs for Machine md-scale-1n4nuo-md-0-7f449d8c5b-zjt4v, Cluster md-scale-69qktu/md-scale-1n4nuo: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for Machine md-scale-1n4nuo-tkxkv, Cluster md-scale-69qktu/md-scale-1n4nuo: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "md-scale-69qktu" namespace @ 01/21/23 06:10:58.194
  STEP: Deleting cluster md-scale-69qktu/md-scale-1n4nuo @ 01/21/23 06:10:58.477
  STEP: Deleting cluster md-scale-1n4nuo @ 01/21/23 06:10:58.499
  INFO: Waiting for the Cluster md-scale-69qktu/md-scale-1n4nuo to be deleted
  STEP: Waiting for cluster md-scale-1n4nuo to be deleted @ 01/21/23 06:10:58.514
  STEP: Deleting namespace used for hosting the "md-scale" test spec @ 01/21/23 06:11:18.529
  INFO: Deleting namespace md-scale-69qktu
• [FAILED] [673.855 seconds]
When testing MachineDeployment scale out/in [It] Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/md_scale.go:71

  [FAILED] Timed out after 600.000s.
  No Control Plane machines came into existence. 
  Expected
      <bool>: false
  to be true
  In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/21/23 06:10:56.021
------------------------------
... skipping 119 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {

        goroutine 31936 [select]
... skipping 3 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {

        goroutine 32058 [sync.Cond.Wait]
... skipping 18 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc002343410, 0x27}, {0xc002343440, 0x22}, {0xc0025c9c10, 0xb}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 21 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0001f11a0, 0x28}, {0xc0001f1260, 0x23}, {0xc002990f74, 0xb}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 6 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {

        goroutine 32069 [sync.Cond.Wait, 9 minutes]
... skipping 18 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000ef57c0, 0x3a}, {0xc000ef5800, 0x35}, {0xc001389bc0, 0x1d}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 6 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {

        goroutine 30848 [sync.Cond.Wait]
... skipping 18 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0005facc0, 0x3e}, {0xc0005fad00, 0x39}, {0xc002915080, 0x21}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 36 lines ...
  STEP: Tearing down the management cluster @ 01/21/23 06:21:44.344
  STEP: Deleting namespace used for hosting test spec @ 01/21/23 06:21:44.523
[SynchronizedAfterSuite] PASSED [1.816 seconds]
------------------------------

Summarizing 5 Failures:
  [FAIL] When testing unhealthy machines remediation [It] Should successfully trigger machine deployment remediation
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
  [FAIL] When testing unhealthy machines remediation [It] Should successfully trigger KCP remediation
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
  [FAIL] When testing node drain timeout [It] A node should be forcefully removed if it cannot be drained in time
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:117
  [FAIL] When testing MachineDeployment scale out/in [It] Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
  [TIMEDOUT] Cluster creation with storage policy [It] should create a cluster successfully
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/storage_policy_test.go:57

Ran 5 of 17 Specs in 3577.162 seconds
FAIL! - Suite Timeout Elapsed -- 0 Passed | 5 Failed | 1 Pending | 11 Skipped
--- FAIL: TestE2E (3577.16s)
FAIL

Ginkgo ran 1 suite in 1h0m31.920628873s

Test Suite Failed

real	60m31.941s
user	5m34.569s
sys	1m6.458s
make: *** [Makefile:183: e2e] Error 1
Releasing IP claims
ipclaim.ipam.metal3.io "ip-claim-a0031dcd36b5321a28f4ba838be53b0357492572" deleted
ipclaim.ipam.metal3.io "workload-ip-claim-8bb26bf00dfb7a29990d5e29d8c44c347b067df1" deleted
vpn
WARNING: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com] appears to be a service account. Service account tokens cannot be revoked, but they will expire automatically. To prevent use of the service account token earlier than the expiration, delete or disable the parent service account. To explicitly delete the key associated with the service account use `gcloud iam service-accounts keys delete` instead.
Revoked credentials:
... skipping 13 lines ...