Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2023-01-27 17:20
Elapsed: 1h5m
Revision: main

No Test Failures!


Error lines from build-log.txt

... skipping 179 lines ...
#18 exporting to image
#18 exporting layers
#18 exporting layers 0.4s done
#18 writing image sha256:9fbfab6784ec65fb2fd93e6f7217680c929b7e336c547da71cfaa870b82d16b0 done
#18 naming to gcr.io/k8s-staging-cluster-api/capv-manager:e2e done
#18 DONE 0.5s
WARNING: failed to get git remote url: fatal: No remote configured to list refs from.
Activated service account credentials for: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com]
Copying file:///logs/artifacts/tempContainers/image.tar [Content-Type=application/x-tar]...
Operation completed over 1 objects/74.6 MiB.                                     
make -C /home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/hack/tools ginkgo
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/hack/tools'
... skipping 126 lines ...

#18 exporting to image
#18 exporting layers done
#18 writing image sha256:9fbfab6784ec65fb2fd93e6f7217680c929b7e336c547da71cfaa870b82d16b0 done
#18 naming to gcr.io/k8s-staging-cluster-api/capv-manager:e2e done
#18 DONE 0.0s
WARNING: failed to get git remote url: fatal: No remote configured to list refs from.
make release-manifests
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
make manifests STAGE=release MANIFEST_DIR=out PULL_POLICY=IfNotPresent IMAGE=gcr.io/cluster-api-provider-vsphere/release/manager:v1.6.0
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
make generate-flavors FLAVOR_DIR=out
make[3]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
... skipping 317 lines ...
  INFO: Waiting for correct number of replicas to exist
  STEP: Scaling the MachineDeployment down to 1 @ 01/27/23 17:38:45.474
  INFO: Scaling machine deployment md-scale-s5u7mw/md-scale-1p7e73-md-0 from 3 to 1 replicas
  INFO: Waiting for correct number of replicas to exist
  STEP: PASSED! @ 01/27/23 17:38:55.589
  STEP: Dumping logs from the "md-scale-1p7e73" workload cluster @ 01/27/23 17:38:55.589
Failed to get logs for Machine md-scale-1p7e73-md-0-c45d6db5d-kntft, Cluster md-scale-s5u7mw/md-scale-1p7e73: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine md-scale-1p7e73-nzmj2, Cluster md-scale-s5u7mw/md-scale-1p7e73: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "md-scale-s5u7mw" namespace @ 01/27/23 17:39:00.043
  STEP: Deleting cluster md-scale-s5u7mw/md-scale-1p7e73 @ 01/27/23 17:39:00.317
  STEP: Deleting cluster md-scale-1p7e73 @ 01/27/23 17:39:00.335
  INFO: Waiting for the Cluster md-scale-s5u7mw/md-scale-1p7e73 to be deleted
  STEP: Waiting for cluster md-scale-1p7e73 to be deleted @ 01/27/23 17:39:00.347
  STEP: Deleting namespace used for hosting the "md-scale" test spec @ 01/27/23 17:39:30.365
... skipping 57 lines ...
  INFO: Waiting for MachineDeployment rollout for MachineDeploymentTopology "md-0" (class "quick-start-worker") to complete.
  STEP: Deleting a MachineDeploymentTopology in the Cluster Topology and wait for associated MachineDeployment to be deleted @ 01/27/23 17:43:32.35
  INFO: Removing MachineDeploymentTopology from the Cluster Topology.
  INFO: Waiting for MachineDeployment to be deleted.
  STEP: PASSED! @ 01/27/23 17:43:42.429
  STEP: Dumping logs from the "clusterclass-changes-31w9i0" workload cluster @ 01/27/23 17:43:42.429
Failed to get logs for Machine clusterclass-changes-31w9i0-mq5gv-j4kcw, Cluster clusterclass-changes-8elim4/clusterclass-changes-31w9i0: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "clusterclass-changes-8elim4" namespace @ 01/27/23 17:43:44.454
  STEP: Deleting cluster clusterclass-changes-8elim4/clusterclass-changes-31w9i0 @ 01/27/23 17:43:44.778
  STEP: Deleting cluster clusterclass-changes-31w9i0 @ 01/27/23 17:43:44.797
  INFO: Waiting for the Cluster clusterclass-changes-8elim4/clusterclass-changes-31w9i0 to be deleted
  STEP: Waiting for cluster clusterclass-changes-31w9i0 to be deleted @ 01/27/23 17:43:44.812
  STEP: Deleting namespace used for hosting the "clusterclass-changes" test spec @ 01/27/23 17:44:04.826
... skipping 34 lines ...

  INFO: Waiting for the cluster infrastructure to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 01/27/23 17:44:05.969
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by capv-e2e-pqjei1/storage-policy-zxa190 to be provisioned
  STEP: Waiting for one control plane node to exist @ 01/27/23 17:44:56.019
  [FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/27/23 17:54:56.022
  STEP: Dumping all the Cluster API resources in the "capv-e2e-pqjei1" namespace @ 01/27/23 17:54:56.022
  STEP: cleaning up namespace: capv-e2e-pqjei1 @ 01/27/23 17:54:56.281
  STEP: Deleting cluster storage-policy-zxa190 @ 01/27/23 17:54:56.297
  INFO: Waiting for the Cluster capv-e2e-pqjei1/storage-policy-zxa190 to be deleted
  STEP: Waiting for cluster storage-policy-zxa190 to be deleted @ 01/27/23 17:54:56.31
  STEP: Deleting namespace used for hosting test spec @ 01/27/23 17:55:16.325
  INFO: Deleting namespace capv-e2e-pqjei1
• [FAILED] [671.499 seconds]
Cluster creation with storage policy [It] should create a cluster successfully
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/storage_policy_test.go:57

  [FAILED] Timed out after 600.002s.
  No Control Plane machines came into existence. 
  Expected
      <bool>: false
  to be true
  In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/27/23 17:54:56.022
------------------------------
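Failures with this shape ("Expected <bool>: false to be true" after a timeout) typically come from a Gomega Eventually poll along the lines of the sketch below. This is illustrative only, with hypothetical names and intervals; it is not the actual code at controlplane_helpers.go:154.

package example

import (
	"context"
	"time"

	"github.com/onsi/gomega"
)

// controlPlaneMachineExists stands in for a query against the management
// cluster (for example, listing Machines owned by the KubeadmControlPlane).
// It is a stub here.
func controlPlaneMachineExists(ctx context.Context) bool { return false }

// waitForControlPlaneMachine polls until a control plane Machine appears or
// the 10-minute budget is exhausted. On timeout, Gomega reports:
//
//	Expected
//	    <bool>: false
//	to be true
func waitForControlPlaneMachine(ctx context.Context, g gomega.Gomega) {
	g.Eventually(func() bool {
		return controlPlaneMachineExists(ctx)
	}, 10*time.Minute, 10*time.Second).Should(gomega.BeTrue(),
		"No Control Plane machines came into existence.")
}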
... skipping 166 lines ...
  INFO: Waiting for the machine deployments to be provisioned
  STEP: Waiting for the workload nodes to exist @ 01/27/23 18:10:16.392
  STEP: Checking all the machines controlled by quick-start-f6a5fs-md-0 are in the "<None>" failure domain @ 01/27/23 18:11:16.458
  INFO: Waiting for the machine pools to be provisioned
  STEP: PASSED! @ 01/27/23 18:11:16.496
  STEP: Dumping logs from the "quick-start-f6a5fs" workload cluster @ 01/27/23 18:11:16.497
Failed to get logs for Machine quick-start-f6a5fs-md-0-b4b897d88-w6dd2, Cluster quick-start-4qk4di/quick-start-f6a5fs: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine quick-start-f6a5fs-t57f5, Cluster quick-start-4qk4di/quick-start-f6a5fs: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "quick-start-4qk4di" namespace @ 01/27/23 18:11:20.651
  STEP: Deleting cluster quick-start-4qk4di/quick-start-f6a5fs @ 01/27/23 18:11:20.96
  STEP: Deleting cluster quick-start-f6a5fs @ 01/27/23 18:11:20.981
  INFO: Waiting for the Cluster quick-start-4qk4di/quick-start-f6a5fs to be deleted
  STEP: Waiting for cluster quick-start-f6a5fs to be deleted @ 01/27/23 18:11:20.995
  STEP: Deleting namespace used for hosting the "quick-start" test spec @ 01/27/23 18:11:51.012
... skipping 56 lines ...
  STEP: Waiting for deployment node-drain-18s55u-unevictable-workload/unevictable-pod-7a1 to be available @ 01/27/23 18:19:17.809
  STEP: Scale down the controlplane of the workload cluster and make sure that nodes running workload can be deleted even the draining process is blocked. @ 01/27/23 18:19:28.097
  INFO: Scaling controlplane node-drain-18s55u/node-drain-ujh84h from 3 to 1 replicas
  INFO: Waiting for correct number of replicas to exist
  STEP: PASSED! @ 01/27/23 18:23:08.669
  STEP: Dumping logs from the "node-drain-ujh84h" workload cluster @ 01/27/23 18:23:08.669
Failed to get logs for Machine node-drain-ujh84h-sw9g7, Cluster node-drain-18s55u/node-drain-ujh84h: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "node-drain-18s55u" namespace @ 01/27/23 18:23:10.787
  STEP: Deleting cluster node-drain-18s55u/node-drain-ujh84h @ 01/27/23 18:23:11.081
  STEP: Deleting cluster node-drain-ujh84h @ 01/27/23 18:23:11.099
  INFO: Waiting for the Cluster node-drain-18s55u/node-drain-ujh84h to be deleted
  STEP: Waiting for cluster node-drain-ujh84h to be deleted @ 01/27/23 18:23:11.115
  STEP: Deleting namespace used for hosting the "node-drain" test spec @ 01/27/23 18:23:41.134
... skipping 115 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {

        goroutine 32279 [sync.Cond.Wait]
... skipping 18 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00224cd20, 0x27}, {0xc00224cd50, 0x22}, {0xc00272f780, 0xb}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 6 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {

        goroutine 32297 [sync.Cond.Wait]
... skipping 18 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001060080, 0x3e}, {0xc0010600c0, 0x39}, {0xc0022a04b0, 0x21}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 21 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc002645950, 0x28}, {0xc002645980, 0x23}, {0xc0017c10e4, 0xb}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 21 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001675480, 0x3a}, {0xc0016754c0, 0x35}, {0xc000955980, 0x1d}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 6 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {

        goroutine 32281 [select]
... skipping 3 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {

        goroutine 72685 [chan receive, 1 minutes]
... skipping 33 lines ...
  STEP: Cleaning up the vSphere session @ 01/27/23 18:25:32.072
  STEP: Tearing down the management cluster @ 01/27/23 18:25:32.192
[SynchronizedAfterSuite] PASSED [1.501 seconds]
------------------------------
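The goroutines dumped above come from the test framework streaming each container's logs concurrently (one goroutine per container, plus pollers for pod metrics). A minimal sketch of that per-container streaming pattern is shown below; it is an assumption-based illustration, not the framework's WatchDeploymentLogs code, and podLogs would come from a Kubernetes clientset GetLogs(...).Stream call.

package example

import (
	"bufio"
	"io"
	"os"
)

// streamContainerLogs copies one container's log stream to a local file,
// mirroring the "one goroutine per container" pattern visible in the dump.
func streamContainerLogs(podLogs io.ReadCloser, path string) error {
	defer podLogs.Close()

	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	out := bufio.NewWriter(f)
	defer out.Flush()

	// Failing to stream logs should not cause the test to fail, so callers
	// typically only log this error instead of asserting on it.
	if _, err := out.ReadFrom(podLogs); err != nil && err != io.ErrUnexpectedEOF {
		return err
	}
	return nil
}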

Summarizing 2 Failures:
  [FAIL] Cluster creation with storage policy [It] should create a cluster successfully
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
  [TIMEDOUT] DHCPOverrides configuration test when Creating a cluster with DHCPOverrides configured [It] Only configures the network with the provided nameservers
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/dhcp_overrides_test.go:66

Ran 9 of 17 Specs in 3577.807 seconds
FAIL! - Suite Timeout Elapsed -- 7 Passed | 2 Failed | 1 Pending | 7 Skipped
--- FAIL: TestE2E (3577.81s)
FAIL

Ginkgo ran 1 suite in 1h0m31.59663468s

Test Suite Failed
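The "Suite Timeout Elapsed" result means the Ginkgo suite-level timeout (about one hour, matching the ~60m wall clock above) expired while a spec was still running, which is why the DHCPOverrides spec is reported as [TIMEDOUT] rather than [FAIL]. The sketch below shows how such a suite timeout is commonly wired up in a Ginkgo v2 entry point; it is a hypothetical example, not this repository's actual TestE2E setup, which may configure the timeout via the --timeout flag instead.

package e2e_test

import (
	"testing"
	"time"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// TestE2E runs the suite with a one-hour budget. When the budget is
// exhausted, the run ends with "Suite Timeout Elapsed" and the in-flight
// spec is reported as [TIMEDOUT].
func TestE2E(t *testing.T) {
	RegisterFailHandler(Fail)

	suiteConfig, reporterConfig := GinkgoConfiguration()
	suiteConfig.Timeout = time.Hour // assumed value for illustration

	RunSpecs(t, "capv-e2e", suiteConfig, reporterConfig)
}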

real	60m31.617s
user	5m38.279s
sys	1m4.158s
make: *** [Makefile:183: e2e] Error 1
Releasing IP claims
ipclaim.ipam.metal3.io "ip-claim-3016ecdebfd7e323971943203c58c92bcabd0828" deleted
ipclaim.ipam.metal3.io "workload-ip-claim-6477fd7a4b476f9fec67390fcb58a24611e6e294" deleted
vpn
WARNING: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com] appears to be a service account. Service account tokens cannot be revoked, but they will expire automatically. To prevent use of the service account token earlier than the expiration, delete or disable the parent service account. To explicitly delete the key associated with the service account use `gcloud iam service-accounts keys delete` instead.
Revoked credentials:
... skipping 13 lines ...