Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2023-01-31 17:22
Elapsed: 1h5m
Revision: main

No Test Failures!


Error lines from build-log.txt

... skipping 175 lines ...
#18 exporting to image
#18 exporting layers
#18 exporting layers 0.4s done
#18 writing image sha256:27ae03709d1284232610f0b0626faa4ba6d909b0b669ce05f9096ab2ced66ed2 done
#18 naming to gcr.io/k8s-staging-cluster-api/capv-manager:e2e done
#18 DONE 0.4s
WARNING: failed to get git remote url: fatal: No remote configured to list refs from.
Activated service account credentials for: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com]
Copying file:///logs/artifacts/tempContainers/image.tar [Content-Type=application/x-tar]...
Operation completed over 1 objects/74.6 MiB.                                     
make -C /home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/hack/tools ginkgo
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/hack/tools'
... skipping 135 lines ...

#18 exporting to image
#18 exporting layers done
#18 writing image sha256:27ae03709d1284232610f0b0626faa4ba6d909b0b669ce05f9096ab2ced66ed2 done
#18 naming to gcr.io/k8s-staging-cluster-api/capv-manager:e2e done
#18 DONE 0.0s
WARNING: failed to get git remote url: fatal: No remote configured to list refs from.
make release-manifests
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
make manifests STAGE=release MANIFEST_DIR=out PULL_POLICY=IfNotPresent IMAGE=gcr.io/cluster-api-provider-vsphere/release/manager:v1.6.0
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
make generate-flavors FLAVOR_DIR=out
make[3]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
... skipping 246 lines ...
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by capv-e2e-s1fuq1/storage-policy-d4t9aa to be provisioned
  STEP: Waiting for one control plane node to exist @ 01/31/23 17:31:44.837
  INFO: Waiting for control plane to be ready
  INFO: Waiting for control plane capv-e2e-s1fuq1/storage-policy-d4t9aa to be ready (implies underlying nodes to be ready as well)
  STEP: Waiting for the control plane to be ready @ 01/31/23 17:36:45.036
  [FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:176 @ 01/31/23 17:46:45.041
  STEP: Dumping all the Cluster API resources in the "capv-e2e-s1fuq1" namespace @ 01/31/23 17:46:45.041
  STEP: cleaning up namespace: capv-e2e-s1fuq1 @ 01/31/23 17:46:45.308
  STEP: Deleting cluster storage-policy-d4t9aa @ 01/31/23 17:46:45.327
  INFO: Waiting for the Cluster capv-e2e-s1fuq1/storage-policy-d4t9aa to be deleted
  STEP: Waiting for cluster storage-policy-d4t9aa to be deleted @ 01/31/23 17:46:45.342
  STEP: Deleting namespace used for hosting test spec @ 01/31/23 17:47:05.357
  INFO: Deleting namespace capv-e2e-s1fuq1
• [FAILED] [953.183 seconds]
Cluster creation with storage policy [It] should create a cluster successfully
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/storage_policy_test.go:57

  [FAILED] Timed out after 600.003s.
  {
    "metadata": {
      "creationTimestamp": null
    },
    "spec": {
      "version": "",
... skipping 58 lines ...

  INFO: Waiting for the cluster infrastructure to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 01/31/23 17:47:06.593
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by quick-start-yqu6wf/quick-start-63iaid-tw9nl to be provisioned
  STEP: Waiting for one control plane node to exist @ 01/31/23 17:47:26.647
  [FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/31/23 17:57:26.652
  STEP: Dumping logs from the "quick-start-63iaid" workload cluster @ 01/31/23 17:57:26.652
Failed to get logs for Machine quick-start-63iaid-md-0-mwz2k-84fc44d75d-mw5hb, Cluster quick-start-yqu6wf/quick-start-63iaid: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for Machine quick-start-63iaid-tw9nl-hrfxq, Cluster quick-start-yqu6wf/quick-start-63iaid: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "quick-start-yqu6wf" namespace @ 01/31/23 17:57:28.851
  STEP: Deleting cluster quick-start-yqu6wf/quick-start-63iaid @ 01/31/23 17:57:29.141
  STEP: Deleting cluster quick-start-63iaid @ 01/31/23 17:57:29.162
  INFO: Waiting for the Cluster quick-start-yqu6wf/quick-start-63iaid to be deleted
  STEP: Waiting for cluster quick-start-63iaid to be deleted @ 01/31/23 17:57:29.179
  STEP: Deleting namespace used for hosting the "quick-start" test spec @ 01/31/23 17:57:49.195
  INFO: Deleting namespace quick-start-yqu6wf
• [FAILED] [643.832 seconds]
ClusterClass Creation using Cluster API quick-start test [PR-Blocking] [ClusterClass] [It] Should create a workload cluster
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/quick_start.go:78

  [FAILED] Timed out after 600.004s.
  No Control Plane machines came into existence. 
  Expected
      <bool>: false
  to be true
  In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/31/23 17:57:26.652
------------------------------
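
A note on the "Failed to get logs" errors above: in "dialing host IP address at : dial tcp :22" the host portion is empty because the Machine never reported an IP address, so the log collector ended up dialing ":22". Go's dialer treats an empty host as the local system, hence the immediate "connection refused". A minimal reproduction of that error shape (illustrative only, not the framework's log-collection code):

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	// An empty host with port 22 mirrors what the log collector dialed
    	// when the workload Machine had no IP address yet. Go's dialer
    	// treats the empty host as the local system.
    	_, err := net.Dial("tcp", ":22")
    	fmt.Println(err) // e.g. "dial tcp :22: connect: connection refused"
    }

The same empty-host pattern repeats for every workload Machine below, which points at the VMs never becoming reachable rather than at a log-collection bug.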
... skipping 31 lines ...

  INFO: Waiting for the cluster infrastructure to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 01/31/23 17:57:50.443
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by clusterclass-changes-fyolbo/clusterclass-changes-fjjqec-dpjnp to be provisioned
  STEP: Waiting for one control plane node to exist @ 01/31/23 17:58:10.5
  [FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/31/23 18:08:10.502
  STEP: Dumping logs from the "clusterclass-changes-fjjqec" workload cluster @ 01/31/23 18:08:10.502
Failed to get logs for Machine clusterclass-changes-fjjqec-dpjnp-nvxg8, Cluster clusterclass-changes-fyolbo/clusterclass-changes-fjjqec: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine clusterclass-changes-fjjqec-md-0-d94z2-6497556597-6bqgg, Cluster clusterclass-changes-fyolbo/clusterclass-changes-fjjqec: dialing host IP address at : dial tcp :22: connect: connection refused
  STEP: Dumping all the Cluster API resources in the "clusterclass-changes-fyolbo" namespace @ 01/31/23 18:08:12.759
  STEP: Deleting cluster clusterclass-changes-fyolbo/clusterclass-changes-fjjqec @ 01/31/23 18:08:13.113
  STEP: Deleting cluster clusterclass-changes-fjjqec @ 01/31/23 18:08:13.134
  INFO: Waiting for the Cluster clusterclass-changes-fyolbo/clusterclass-changes-fjjqec to be deleted
  STEP: Waiting for cluster clusterclass-changes-fjjqec to be deleted @ 01/31/23 18:08:13.146
  STEP: Deleting namespace used for hosting the "clusterclass-changes" test spec @ 01/31/23 18:08:33.163
  INFO: Deleting namespace clusterclass-changes-fyolbo
• [FAILED] [643.970 seconds]
When testing ClusterClass changes [ClusterClass] [It] Should successfully rollout the managed topology upon changes to the ClusterClass
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/clusterclass_changes.go:132

  [FAILED] Timed out after 600.001s.
  No Control Plane machines came into existence. 
  Expected
      <bool>: false
  to be true
  In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/31/23 18:08:10.502
------------------------------
... skipping 31 lines ...

  INFO: Waiting for the cluster infrastructure to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 01/31/23 18:08:34.347
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by md-rollout-aan4vo/md-rollout-kv157p to be provisioned
  STEP: Waiting for one control plane node to exist @ 01/31/23 18:08:54.383
  [FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/31/23 18:18:54.386
  STEP: Dumping logs from the "md-rollout-kv157p" workload cluster @ 01/31/23 18:18:54.386
Failed to get logs for Machine md-rollout-kv157p-md-0-86497c7b56-njdww, Cluster md-rollout-aan4vo/md-rollout-kv157p: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for Machine md-rollout-kv157p-mzcvg, Cluster md-rollout-aan4vo/md-rollout-kv157p: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "md-rollout-aan4vo" namespace @ 01/31/23 18:18:56.552
  STEP: Deleting cluster md-rollout-aan4vo/md-rollout-kv157p @ 01/31/23 18:18:56.84
  STEP: Deleting cluster md-rollout-kv157p @ 01/31/23 18:18:56.857
  INFO: Waiting for the Cluster md-rollout-aan4vo/md-rollout-kv157p to be deleted
  STEP: Waiting for cluster md-rollout-kv157p to be deleted @ 01/31/23 18:18:56.87
  STEP: Deleting namespace used for hosting the "md-rollout" test spec @ 01/31/23 18:19:16.885
  INFO: Deleting namespace md-rollout-aan4vo
• [FAILED] [643.724 seconds]
ClusterAPI Machine Deployment Tests Running the MachineDeployment rollout spec [It] Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/md_rollout.go:71

  [FAILED] Timed out after 600.002s.
  No Control Plane machines came into existence. 
  Expected
      <bool>: false
  to be true
  In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/31/23 18:18:54.386
------------------------------
... skipping 113 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {

        goroutine 33042 [sync.Cond.Wait, 2 minutes]
... skipping 18 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c8f30, 0x28}, {0xc0021c8f60, 0x23}, {0xc0021c0704, 0xb}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 44 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000b41400, 0x3e}, {0xc000b41440, 0x39}, {0xc001a21470, 0x21}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 6 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {

        goroutine 32993 [select]
... skipping 3 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {

        goroutine 32990 [sync.Cond.Wait]
... skipping 18 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001da7980, 0x27}, {0xc001da79b0, 0x22}, {0xc002a91e70, 0xb}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 6 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {

        goroutine 33003 [sync.Cond.Wait, 8 minutes]
... skipping 18 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0012953c0, 0x3a}, {0xc001295400, 0x35}, {0xc0023b1b20, 0x1d}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 11 lines ...
  STEP: Cleaning up the vSphere session @ 01/31/23 18:27:50.266
  STEP: Tearing down the management cluster @ 01/31/23 18:27:50.481
[SynchronizedAfterSuite] PASSED [1.580 seconds]
------------------------------

Summarizing 5 Failures:
  [FAIL] Cluster creation with storage policy [It] should create a cluster successfully
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:176
  [FAIL] ClusterClass Creation using Cluster API quick-start test [PR-Blocking] [ClusterClass] [It] Should create a workload cluster
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
  [FAIL] When testing ClusterClass changes [ClusterClass] [It] Should successfully rollout the managed topology upon changes to the ClusterClass
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
  [FAIL] ClusterAPI Machine Deployment Tests Running the MachineDeployment rollout spec [It] Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
  [TIMEDOUT] DHCPOverrides configuration test when Creating a cluster with DHCPOverrides configured [It] Only configures the network with the provided nameservers
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/dhcp_overrides_test.go:66
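
All four [FAIL] entries share one root symptom: the ten-minute wait in the cluster-api test framework's controlplane_helpers.go expired without a single control-plane Machine appearing, and the fifth spec was then cut off when the overall suite timeout elapsed. The "Expected <bool>: false to be true" text is standard Gomega output from an Eventually(...).Should(BeTrue(), ...) poll. A simplified sketch of that polling pattern (hypothetical helper name and intervals, chosen to match the 600s timeout seen above; not the framework's exact code):

    package framework

    import (
    	"context"
    	"time"

    	. "github.com/onsi/gomega"
    	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
    	"sigs.k8s.io/controller-runtime/pkg/client"
    )

    // waitForControlPlaneMachine polls until at least one control-plane
    // Machine exists; on timeout Gomega prints the failure message seen in
    // this log ("No Control Plane machines came into existence.").
    func waitForControlPlaneMachine(ctx context.Context, c client.Client, namespace string) {
    	Eventually(func() (bool, error) {
    		machines := &clusterv1.MachineList{}
    		if err := c.List(ctx, machines,
    			client.InNamespace(namespace),
    			// Control-plane Machines carry this well-known label.
    			client.HasLabels{"cluster.x-k8s.io/control-plane"},
    		); err != nil {
    			return false, err
    		}
    		return len(machines.Items) > 0, nil
    	}, 10*time.Minute, 10*time.Second).Should(BeTrue(),
    		"No Control Plane machines came into existence. ")
    }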

Ran 5 of 17 Specs in 3515.826 seconds
FAIL! - Suite Timeout Elapsed -- 0 Passed | 5 Failed | 1 Pending | 11 Skipped
--- FAIL: TestE2E (3515.83s)
FAIL

Ginkgo ran 1 suite in 1h0m22.095758641s

Test Suite Failed

real	60m22.150s
user	9m19.891s
sys	2m38.647s
make: *** [Makefile:183: e2e] Error 1
Releasing IP claims
ipclaim.ipam.metal3.io "ip-claim-a0c1f281d3cd33ef78622f17b496197096595ddd" deleted
ipclaim.ipam.metal3.io "workload-ip-claim-b82287faf9b5fdd5259939056e7f7e7ccde54cba" deleted
vpn
WARNING: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com] appears to be a service account. Service account tokens cannot be revoked, but they will expire automatically. To prevent use of the service account token earlier than the expiration, delete or disable the parent service account. To explicitly delete the key associated with the service account use `gcloud iam service-accounts keys delete` instead.
Revoked credentials:
... skipping 13 lines ...