Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2023-01-22 05:17
Elapsed: 1h4m
Revision: main

No test failures recorded in JUnit results (see the error lines from the build log below).


Error lines from build-log.txt

... skipping 568 lines ...

  INFO: Waiting for the cluster infrastructure to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 01/22/23 05:25:11.677
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by quick-start-bw75xv/quick-start-pjqv8d to be provisioned
  STEP: Waiting for one control plane node to exist @ 01/22/23 05:25:41.75
  [FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/22/23 05:35:41.751
  STEP: Dumping logs from the "quick-start-pjqv8d" workload cluster @ 01/22/23 05:35:41.751
Failed to get logs for Machine quick-start-pjqv8d-f5gcf, Cluster quick-start-bw75xv/quick-start-pjqv8d: dialing host IP address at 192.168.6.117: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to get logs for Machine quick-start-pjqv8d-md-0-66996d6c48-wfkcx, Cluster quick-start-bw75xv/quick-start-pjqv8d: dialing host IP address at : dial tcp :22: connect: connection refused
  STEP: Dumping all the Cluster API resources in the "quick-start-bw75xv" namespace @ 01/22/23 05:35:43.029
  STEP: Deleting cluster quick-start-bw75xv/quick-start-pjqv8d @ 01/22/23 05:35:43.323
  STEP: Deleting cluster quick-start-pjqv8d @ 01/22/23 05:35:43.35
  INFO: Waiting for the Cluster quick-start-bw75xv/quick-start-pjqv8d to be deleted
  STEP: Waiting for cluster quick-start-pjqv8d to be deleted @ 01/22/23 05:35:43.397
  STEP: Deleting namespace used for hosting the "quick-start" test spec @ 01/22/23 05:36:03.413
  INFO: Deleting namespace quick-start-bw75xv
• [FAILED] [656.458 seconds]
Cluster creation with [Ignition] bootstrap [PR-Blocking] [It] Should create a workload cluster
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/quick_start.go:78

  [FAILED] Timed out after 600.000s.
  No Control Plane machines came into existence. 
  Expected
      <bool>: false
  to be true
  In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/22/23 05:35:41.751
------------------------------
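The timeout above is raised by the control-plane wait helper at controlplane_helpers.go:154, which polls the management cluster until a control plane Machine appears and asserts the result with Gomega. Below is a simplified, hypothetical sketch of that polling pattern, not the framework's actual code; the function name, signature, and use of the well-known Machine labels are assumptions for illustration only.

package example

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForOneControlPlaneMachine polls until at least one control plane Machine
// exists for the given workload cluster. A hypothetical stand-in for the helper
// referenced in the failure above, not its real implementation.
func waitForOneControlPlaneMachine(ctx context.Context, c client.Client, namespace, clusterName string, timeout, interval time.Duration) {
	Eventually(func() (bool, error) {
		machines := &clusterv1.MachineList{}
		if err := c.List(ctx, machines,
			client.InNamespace(namespace),
			client.MatchingLabels{
				"cluster.x-k8s.io/cluster-name":  clusterName, // well-known CAPI cluster-name label
				"cluster.x-k8s.io/control-plane": "",          // present (with empty value) on control plane Machines
			},
		); err != nil {
			return false, err
		}
		return len(machines.Items) > 0, nil
	}, timeout, interval).Should(BeTrue(), "No Control Plane machines came into existence")
}

When no Machine shows up within the timeout, the final poll result is false and Gomega reports the "Expected <bool>: false to be true" message seen in the log.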
... skipping 31 lines ...

  INFO: Waiting for the cluster infrastructure to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 01/22/23 05:36:04.762
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by capv-e2e-096q5e/storage-policy-sozugs to be provisioned
  STEP: Waiting for one control plane node to exist @ 01/22/23 05:36:24.794
  [FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/22/23 05:46:24.798
  STEP: Dumping all the Cluster API resources in the "capv-e2e-096q5e" namespace @ 01/22/23 05:46:24.798
  STEP: cleaning up namespace: capv-e2e-096q5e @ 01/22/23 05:46:25.089
  STEP: Deleting cluster storage-policy-sozugs @ 01/22/23 05:46:25.107
  INFO: Waiting for the Cluster capv-e2e-096q5e/storage-policy-sozugs to be deleted
  STEP: Waiting for cluster storage-policy-sozugs to be deleted @ 01/22/23 05:46:25.12
  STEP: Deleting namespace used for hosting test spec @ 01/22/23 05:46:45.135
  INFO: Deleting namespace capv-e2e-096q5e
• [FAILED] [641.719 seconds]
Cluster creation with storage policy [It] should create a cluster successfully
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/storage_policy_test.go:57

  [FAILED] Timed out after 600.003s.
  No Control Plane machines came into existence. 
  Expected
      <bool>: false
  to be true
  In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/22/23 05:46:24.798
------------------------------
... skipping 45 lines ...
  STEP: Upgrading MachineDeployment Infrastructure ref and wait for rolling upgrade @ 01/22/23 05:51:46.715
  INFO: Patching the new infrastructure ref to Machine Deployment md-rollout-ultgm2/md-rollout-oiblun-md-0
  INFO: Waiting for rolling upgrade to start.
  INFO: Waiting for MachineDeployment rolling upgrade to start
  INFO: Waiting for rolling upgrade to complete.
  INFO: Waiting for MachineDeployment rolling upgrade to complete
  [FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/machinedeployment_helpers.go:294 @ 01/22/23 06:07:46.786
  STEP: Dumping logs from the "md-rollout-oiblun" workload cluster @ 01/22/23 06:07:46.786
Failed to get logs for Machine md-rollout-oiblun-md-0-6564b9fc-97x9d, Cluster md-rollout-ultgm2/md-rollout-oiblun: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine md-rollout-oiblun-md-0-7b7c5bc455-mxqxl, Cluster md-rollout-ultgm2/md-rollout-oiblun: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine md-rollout-oiblun-w7cqs, Cluster md-rollout-ultgm2/md-rollout-oiblun: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "md-rollout-ultgm2" namespace @ 01/22/23 06:07:54.046
  STEP: Deleting cluster md-rollout-ultgm2/md-rollout-oiblun @ 01/22/23 06:07:54.345
  STEP: Deleting cluster md-rollout-oiblun @ 01/22/23 06:07:54.364
  INFO: Waiting for the Cluster md-rollout-ultgm2/md-rollout-oiblun to be deleted
  STEP: Waiting for cluster md-rollout-oiblun to be deleted @ 01/22/23 06:07:54.377
  STEP: Deleting namespace used for hosting the "md-rollout" test spec @ 01/22/23 06:08:14.395
  INFO: Deleting namespace md-rollout-ultgm2
• [FAILED] [1289.259 seconds]
ClusterAPI Machine Deployment Tests Running the MachineDeployment rollout spec [It] Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/md_rollout.go:71

  [FAILED] Timed out after 900.000s.
  Expected
      <bool>: false
  to be true
  In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/machinedeployment_helpers.go:294 @ 01/22/23 06:07:46.786
------------------------------
When testing unhealthy machines remediation Should successfully trigger machine deployment remediation
... skipping 31 lines ...

  INFO: Waiting for the cluster infrastructure to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 01/22/23 06:08:15.671
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by mhc-remediation-pp7otj/mhc-remediation-hsyjmq to be provisioned
  STEP: Waiting for one control plane node to exist @ 01/22/23 06:09:05.723
  [FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/22/23 06:19:05.723
  STEP: Dumping logs from the "mhc-remediation-hsyjmq" workload cluster @ 01/22/23 06:19:05.723
Failed to get logs for Machine mhc-remediation-hsyjmq-kkvf9, Cluster mhc-remediation-pp7otj/mhc-remediation-hsyjmq: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine mhc-remediation-hsyjmq-md-0-75c85b467b-47b8b, Cluster mhc-remediation-pp7otj/mhc-remediation-hsyjmq: dialing host IP address at : dial tcp :22: connect: connection refused
  STEP: Dumping all the Cluster API resources in the "mhc-remediation-pp7otj" namespace @ 01/22/23 06:19:07.917
  STEP: Deleting cluster mhc-remediation-pp7otj/mhc-remediation-hsyjmq @ 01/22/23 06:19:08.19
  STEP: Deleting cluster mhc-remediation-hsyjmq @ 01/22/23 06:19:08.21
  INFO: Waiting for the Cluster mhc-remediation-pp7otj/mhc-remediation-hsyjmq to be deleted
  STEP: Waiting for cluster mhc-remediation-hsyjmq to be deleted @ 01/22/23 06:19:08.227
  STEP: Deleting namespace used for hosting the "mhc-remediation" test spec @ 01/22/23 06:19:28.243
  INFO: Deleting namespace mhc-remediation-pp7otj
• [FAILED] [673.843 seconds]
When testing unhealthy machines remediation [It] Should successfully trigger machine deployment remediation
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/mhc_remediations.go:83

  [FAILED] Timed out after 600.000s.
  No Control Plane machines came into existence. 
  Expected
      <bool>: false
  to be true
  In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/22/23 06:19:05.723
------------------------------
... skipping 34 lines ...
  STEP: Waiting for cluster to enter the provisioned phase @ 01/22/23 06:19:29.582
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by mhc-remediation-k6saxq/mhc-remediation-12mghd to be provisioned
  STEP: Waiting for one control plane node to exist @ 01/22/23 06:20:19.633
  [TIMEDOUT] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/mhc_remediations.go:116 @ 01/22/23 06:21:57.878
  STEP: Dumping logs from the "mhc-remediation-12mghd" workload cluster @ 01/22/23 06:21:57.88
Failed to get logs for Machine mhc-remediation-12mghd-md-0-58bf596478-lqsqf, Cluster mhc-remediation-k6saxq/mhc-remediation-12mghd: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for Machine mhc-remediation-12mghd-sftvn, Cluster mhc-remediation-k6saxq/mhc-remediation-12mghd: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "mhc-remediation-k6saxq" namespace @ 01/22/23 06:21:59.791
  STEP: Deleting cluster mhc-remediation-k6saxq/mhc-remediation-12mghd @ 01/22/23 06:22:00.083
  STEP: Deleting cluster mhc-remediation-12mghd @ 01/22/23 06:22:00.099
  INFO: Waiting for the Cluster mhc-remediation-k6saxq/mhc-remediation-12mghd to be deleted
  STEP: Waiting for cluster mhc-remediation-12mghd to be deleted @ 01/22/23 06:22:00.111
  STEP: Deleting namespace used for hosting the "mhc-remediation" test spec @ 01/22/23 06:22:20.126
... skipping 69 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {

        goroutine 45898 [chan receive, 2 minutes]
... skipping 41 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00265c810, 0x27}, {0xc00265c840, 0x22}, {0xc0011bc1a4, 0xb}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 21 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00222dce0, 0x28}, {0xc00222dd10, 0x23}, {0xc0020e5184, 0xb}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 6 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {

        goroutine 32835 [select]
... skipping 3 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {

        goroutine 32747 [sync.Cond.Wait, 2 minutes]
... skipping 18 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001516600, 0x3a}, {0xc001516640, 0x35}, {0xc000fa68a0, 0x1d}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 21 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000a51440, 0x3e}, {0xc000a51480, 0x39}, {0xc000fd9260, 0x21}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 6 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {
------------------------------
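The goroutine dump above is Ginkgo listing what was still running when the spec timed out: log-streaming and metrics-watching goroutines started by WatchDeploymentLogs and WatchPodMetrics in deployment_helpers.go. The following is a rough, hypothetical sketch of those two patterns; names and signatures are illustrative and not the framework's API.

package example

import (
	"bufio"
	"context"
	"io"
	"os"
	"time"

	. "github.com/onsi/ginkgo/v2"
)

// streamContainerLogs copies one container's log stream to a local file in its own
// goroutine, mirroring the out.ReadFrom(podLogs) snippet in the dump above. In the
// real helper podLogs comes from a client-go log request; here it is just a reader.
func streamContainerLogs(podLogs io.ReadCloser, f *os.File) {
	go func() {
		defer GinkgoRecover()
		defer podLogs.Close()

		out := bufio.NewWriter(f)
		defer out.Flush()

		// Failing to stream logs should not cause the test to fail.
		if _, err := out.ReadFrom(podLogs); err != nil && err != io.ErrUnexpectedEOF {
			GinkgoWriter.Printf("failed to stream container logs: %v\n", err)
		}
	}()
}

// pollPodMetrics runs a periodic task until the test context is cancelled, mirroring
// the select-on-ctx.Done() loop that WatchPodMetrics shows in the dump above.
func pollPodMetrics(ctx context.Context, interval time.Duration, collect func()) {
	go func() {
		defer GinkgoRecover()
		for {
			select {
			case <-ctx.Done():
				return
			case <-time.After(interval):
				collect()
			}
		}
	}()
}

Because these goroutines only exit when their context is cancelled or the log stream closes, they are still alive and get reported when the suite deadline fires mid-test.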
SSSSSSS
... skipping 8 lines ...
  STEP: Cleaning up the vSphere session @ 01/22/23 06:22:20.148
  STEP: Tearing down the management cluster @ 01/22/23 06:22:20.32
[SynchronizedAfterSuite] PASSED [1.686 seconds]
------------------------------

Summarizing 5 Failures:
  [FAIL] Cluster creation with [Ignition] bootstrap [PR-Blocking] [It] Should create a workload cluster
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
  [FAIL] Cluster creation with storage policy [It] should create a cluster successfully
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
  [FAIL] ClusterAPI Machine Deployment Tests Running the MachineDeployment rollout spec [It] Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/machinedeployment_helpers.go:294
  [FAIL] When testing unhealthy machines remediation [It] Should successfully trigger machine deployment remediation
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
  [TIMEDOUT] When testing unhealthy machines remediation [It] Should successfully trigger KCP remediation
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/mhc_remediations.go:116

Ran 5 of 17 Specs in 3569.608 seconds
FAIL! - Suite Timeout Elapsed -- 0 Passed | 5 Failed | 1 Pending | 11 Skipped
--- FAIL: TestE2E (3569.61s)
FAIL

Ginkgo ran 1 suite in 1h0m24.044765646s

Test Suite Failed

real	60m24.069s
user	5m42.005s
sys	1m8.018s
make: *** [Makefile:183: e2e] Error 1
Releasing IP claims
ipclaim.ipam.metal3.io "ip-claim-6367c23dd609c7cbe074f68b932f4c67994e7fe4" deleted
ipclaim.ipam.metal3.io "workload-ip-claim-e162098fc94ffddee43e933f81051a8d450a365d" deleted
vpn
WARNING: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com] appears to be a service account. Service account tokens cannot be revoked, but they will expire automatically. To prevent use of the service account token earlier than the expiration, delete or disable the parent service account. To explicitly delete the key associated with the service account use `gcloud iam service-accounts keys delete` instead.
Revoked credentials:
... skipping 13 lines ...