Result FAILURE
Tests 0 failed / 0 succeeded
Started 2023-01-23 17:19
Elapsed 1h5m
Revision main

No Test Failures!


Error lines from build-log.txt

... skipping 572 lines ...

  INFO: Waiting for the cluster infrastructure to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 01/23/23 17:26:42.925
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by quick-start-j9adhg/quick-start-cl8usm-xnd8t to be provisioned
  STEP: Waiting for one control plane node to exist @ 01/23/23 17:27:12.995
  [FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/23/23 17:37:12.998
  STEP: Dumping logs from the "quick-start-cl8usm" workload cluster @ 01/23/23 17:37:12.999
Failed to get logs for Machine quick-start-cl8usm-md-0-vdmlh-949ff58fb-jfqwd, Cluster quick-start-j9adhg/quick-start-cl8usm: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for Machine quick-start-cl8usm-xnd8t-wxhxg, Cluster quick-start-j9adhg/quick-start-cl8usm: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "quick-start-j9adhg" namespace @ 01/23/23 17:37:15.413
  STEP: Deleting cluster quick-start-j9adhg/quick-start-cl8usm @ 01/23/23 17:37:16.018
  STEP: Deleting cluster quick-start-cl8usm @ 01/23/23 17:37:16.06
  INFO: Waiting for the Cluster quick-start-j9adhg/quick-start-cl8usm to be deleted
  STEP: Waiting for cluster quick-start-cl8usm to be deleted @ 01/23/23 17:37:16.087
  STEP: Deleting namespace used for hosting the "quick-start" test spec @ 01/23/23 17:37:36.113
  INFO: Deleting namespace quick-start-j9adhg
• [FAILED] [655.996 seconds]
ClusterClass Creation using Cluster API quick-start test [PR-Blocking] [ClusterClass] [It] Should create a workload cluster
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/quick_start.go:78

  [FAILED] Timed out after 600.002s.
  No Control Plane machines came into existence. 
  Expected
      <bool>: false
  to be true
  In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/23/23 17:37:12.998
------------------------------
... skipping 31 lines ...

  INFO: Waiting for the cluster infrastructure to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 01/23/23 17:37:38.102
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by quick-start-60tp7p/quick-start-9p9hon to be provisioned
  STEP: Waiting for one control plane node to exist @ 01/23/23 17:37:58.175
  [FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/23/23 17:47:58.176
  STEP: Dumping logs from the "quick-start-9p9hon" workload cluster @ 01/23/23 17:47:58.176
Failed to get logs for Machine quick-start-9p9hon-md-0-fc97bd7b4-mgp6l, Cluster quick-start-60tp7p/quick-start-9p9hon: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for Machine quick-start-9p9hon-nn75p, Cluster quick-start-60tp7p/quick-start-9p9hon: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "quick-start-60tp7p" namespace @ 01/23/23 17:48:00.49
  STEP: Deleting cluster quick-start-60tp7p/quick-start-9p9hon @ 01/23/23 17:48:00.807
  STEP: Deleting cluster quick-start-9p9hon @ 01/23/23 17:48:00.827
  INFO: Waiting for the Cluster quick-start-60tp7p/quick-start-9p9hon to be deleted
  STEP: Waiting for cluster quick-start-9p9hon to be deleted @ 01/23/23 17:48:00.843
  STEP: Deleting namespace used for hosting the "quick-start" test spec @ 01/23/23 17:48:20.859
  INFO: Deleting namespace quick-start-60tp7p
• [FAILED] [644.738 seconds]
Cluster Creation using Cluster API quick-start test [PR-Blocking] [It] Should create a workload cluster
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/quick_start.go:78

  [FAILED] Timed out after 600.000s.
  No Control Plane machines came into existence. 
  Expected
      <bool>: false
  to be true
  In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/23/23 17:47:58.176
------------------------------
... skipping 32 lines ...

  INFO: Waiting for the cluster infrastructure to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 01/23/23 17:48:30.16
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by hw-upgrade-e2e-5pmlt6/hw-upgrade-60y60e to be provisioned
  STEP: Waiting for one control plane node to exist @ 01/23/23 17:49:20.215
  [FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/23/23 17:59:20.216
  STEP: Dumping all the Cluster API resources in the "hw-upgrade-e2e-5pmlt6" namespace @ 01/23/23 17:59:20.216
  STEP: cleaning up namespace: hw-upgrade-e2e-5pmlt6 @ 01/23/23 17:59:20.56
  STEP: Deleting cluster hw-upgrade-60y60e @ 01/23/23 17:59:20.585
  INFO: Waiting for the Cluster hw-upgrade-e2e-5pmlt6/hw-upgrade-60y60e to be deleted
  STEP: Waiting for cluster hw-upgrade-60y60e to be deleted @ 01/23/23 17:59:20.601
  STEP: Deleting namespace used for hosting test spec @ 01/23/23 17:59:40.617
  INFO: Deleting namespace hw-upgrade-e2e-5pmlt6
• [FAILED] [679.756 seconds]
Hardware version upgrade [It] creates a cluster with VM hardware versions upgraded
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/hardware_upgrade_test.go:57

  [FAILED] Timed out after 600.001s.
  No Control Plane machines came into existence. 
  Expected
      <bool>: false
  to be true
  In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/23/23 17:59:20.216
------------------------------
... skipping 31 lines ...

  INFO: Waiting for the cluster infrastructure to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 01/23/23 17:59:42.092
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by capv-e2e-yjxt1x/storage-policy-gf6w1u to be provisioned
  STEP: Waiting for one control plane node to exist @ 01/23/23 18:00:02.163
  [FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/23/23 18:10:02.166
  STEP: Dumping all the Cluster API resources in the "capv-e2e-yjxt1x" namespace @ 01/23/23 18:10:02.166
  STEP: cleaning up namespace: capv-e2e-yjxt1x @ 01/23/23 18:10:02.445
  STEP: Deleting cluster storage-policy-gf6w1u @ 01/23/23 18:10:02.461
  INFO: Waiting for the Cluster capv-e2e-yjxt1x/storage-policy-gf6w1u to be deleted
  STEP: Waiting for cluster storage-policy-gf6w1u to be deleted @ 01/23/23 18:10:02.475
  STEP: Deleting namespace used for hosting test spec @ 01/23/23 18:10:22.49
  INFO: Deleting namespace capv-e2e-yjxt1x
• [FAILED] [641.867 seconds]
Cluster creation with storage policy [It] should create a cluster successfully
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/storage_policy_test.go:57

  [FAILED] Timed out after 600.002s.
  No Control Plane machines came into existence. 
  Expected
      <bool>: false
  to be true
  In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/23/23 18:10:02.166
------------------------------
... skipping 36 lines ...

  INFO: Waiting for the cluster infrastructure to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 01/23/23 18:10:23.673
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by quick-start-w06wbr/quick-start-kshjdz to be provisioned
  STEP: Waiting for one control plane node to exist @ 01/23/23 18:11:13.737
  [FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/23/23 18:21:13.906
  STEP: Dumping logs from the "quick-start-kshjdz" workload cluster @ 01/23/23 18:21:13.906
Failed to get logs for Machine quick-start-kshjdz-lppkn, Cluster quick-start-w06wbr/quick-start-kshjdz: dialing host IP address at 192.168.6.127: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to get logs for Machine quick-start-kshjdz-md-0-b9868b7bc-9x4dp, Cluster quick-start-w06wbr/quick-start-kshjdz: dialing host IP address at : dial tcp :22: connect: connection refused
  STEP: Dumping all the Cluster API resources in the "quick-start-w06wbr" namespace @ 01/23/23 18:21:15.244
  STEP: Deleting cluster quick-start-w06wbr/quick-start-kshjdz @ 01/23/23 18:21:15.533
  STEP: Deleting cluster quick-start-kshjdz @ 01/23/23 18:21:15.552
  INFO: Waiting for the Cluster quick-start-w06wbr/quick-start-kshjdz to be deleted
  STEP: Waiting for cluster quick-start-kshjdz to be deleted @ 01/23/23 18:21:15.567
  STEP: Deleting namespace used for hosting the "quick-start" test spec @ 01/23/23 18:21:35.58
  INFO: Deleting namespace quick-start-w06wbr
• [FAILED] [672.932 seconds]
Cluster creation with [Ignition] bootstrap [PR-Blocking] [It] Should create a workload cluster
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/quick_start.go:78

  [FAILED] Timed out after 600.003s.
  No Control Plane machines came into existence. 
  Expected
      <bool>: false
  to be true
  In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/23/23 18:21:13.906
------------------------------
... skipping 33 lines ...
  STEP: Waiting for cluster to enter the provisioned phase @ 01/23/23 18:21:36.63
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by node-drain-rvlttx/node-drain-3wn96g to be provisioned
  STEP: Waiting for one control plane node to exist @ 01/23/23 18:22:26.678
  [TIMEDOUT] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/node_drain_timeout.go:83 @ 01/23/23 18:23:50.195
  STEP: Dumping logs from the "node-drain-3wn96g" workload cluster @ 01/23/23 18:23:50.196
Failed to get logs for Machine node-drain-3wn96g-bqsd9, Cluster node-drain-rvlttx/node-drain-3wn96g: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for Machine node-drain-3wn96g-md-0-79788b5b55-h6rlz, Cluster node-drain-rvlttx/node-drain-3wn96g: dialing host IP address at : dial tcp :22: connect: connection refused
  STEP: Dumping all the Cluster API resources in the "node-drain-rvlttx" namespace @ 01/23/23 18:23:50.261
  STEP: Deleting cluster node-drain-rvlttx/node-drain-3wn96g @ 01/23/23 18:23:50.583
  STEP: Deleting cluster node-drain-3wn96g @ 01/23/23 18:23:50.601
  INFO: Waiting for the Cluster node-drain-rvlttx/node-drain-3wn96g to be deleted
  STEP: Waiting for cluster node-drain-3wn96g to be deleted @ 01/23/23 18:23:50.61
  STEP: Deleting namespace used for hosting the "node-drain" test spec @ 01/23/23 18:24:10.624
... skipping 92 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {

        goroutine 23950 [sync.Cond.Wait]
... skipping 18 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000c17a40, 0x3a}, {0xc000c17a80, 0x35}, {0xc000d23bc0, 0x1d}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 21 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc002487860, 0x28}, {0xc002487890, 0x23}, {0xc0026675b4, 0xb}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 6 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {

        goroutine 24238 [select]
... skipping 3 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {

        goroutine 24232 [sync.Cond.Wait]
... skipping 18 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001085b80, 0x3e}, {0xc001085bc0, 0x39}, {0xc00220f8c0, 0x21}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 21 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0025b8e70, 0x27}, {0xc0025b8ea0, 0x22}, {0xc0025b0a04, 0xb}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 6 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {
------------------------------
SSSSSSSSS
... skipping 3 lines ...
  STEP: Cleaning up the vSphere session @ 01/23/23 18:24:10.647
  STEP: Tearing down the management cluster @ 01/23/23 18:24:10.895
[SynchronizedAfterSuite] PASSED [1.490 seconds]
------------------------------

Summarizing 6 Failures:
  [FAIL] ClusterClass Creation using Cluster API quick-start test [PR-Blocking] [ClusterClass] [It] Should create a workload cluster
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
  [FAIL] Cluster Creation using Cluster API quick-start test [PR-Blocking] [It] Should create a workload cluster
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
  [FAIL] Hardware version upgrade [It] creates a cluster with VM hardware versions upgraded
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
  [FAIL] Cluster creation with storage policy [It] should create a cluster successfully
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
  [FAIL] Cluster creation with [Ignition] bootstrap [PR-Blocking] [It] Should create a workload cluster
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
  [TIMEDOUT] When testing node drain timeout [It] A node should be forcefully removed if it cannot be drained in time
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/node_drain_timeout.go:83

Ran 6 of 17 Specs in 3563.033 seconds
FAIL! - Suite Timeout Elapsed -- 0 Passed | 6 Failed | 1 Pending | 10 Skipped
--- FAIL: TestE2E (3563.03s)
FAIL

Ginkgo ran 1 suite in 1h0m22.039565587s

Test Suite Failed

real	60m22.234s
user	5m45.230s
sys	1m8.791s
make: *** [Makefile:183: e2e] Error 1
Releasing IP claims
ipclaim.ipam.metal3.io "ip-claim-8d120dfe2596a7986202c1849d32f4a4bcf24e54" deleted
ipclaim.ipam.metal3.io "workload-ip-claim-7148687b11cefc6dfbcae7beae16c13a626e93ac" deleted
vpn
WARNING: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com] appears to be a service account. Service account tokens cannot be revoked, but they will expire automatically. To prevent use of the service account token earlier than the expiration, delete or disable the parent service account. To explicitly delete the key associated with the service account use `gcloud iam service-accounts keys delete` instead`.
Revoked credentials:
... skipping 13 lines ...