Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2023-02-02 05:22
Elapsed: 1h5m
Revision: main

No Test Failures!


Error lines from build-log.txt

... skipping 175 lines ...
#18 exporting layers 0.4s done
#18 writing image sha256:125251886084e8c665c5d57b3c85f8a503e688caac0d7822a58c0e237ac736f8 done
#18 naming to gcr.io/k8s-staging-cluster-api/capv-manager:e2e done
#18 DONE 0.4s

#10 [builder 1/6] FROM docker.io/library/golang:1.19.3@sha256:10e3c0f39f8e237baa5b66c5295c578cac42a99536cc9333d8505324a82407d9
WARNING: failed to get git remote url: fatal: No remote configured to list refs from.
Activated service account credentials for: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com]
Copying file:///logs/artifacts/tempContainers/image.tar [Content-Type=application/x-tar]...
Operation completed over 1 objects/74.6 MiB.                                     
make -C /home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/hack/tools ginkgo
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/hack/tools'
... skipping 126 lines ...

#18 exporting to image
#18 exporting layers done
#18 writing image sha256:125251886084e8c665c5d57b3c85f8a503e688caac0d7822a58c0e237ac736f8 done
#18 naming to gcr.io/k8s-staging-cluster-api/capv-manager:e2e done
#18 DONE 0.0s
WARNING: failed to get git remote url: fatal: No remote configured to list refs from.
make release-manifests
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
make manifests STAGE=release MANIFEST_DIR=out PULL_POLICY=IfNotPresent IMAGE=gcr.io/cluster-api-provider-vsphere/release/manager:v1.6.0
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
make generate-flavors FLAVOR_DIR=out
make[3]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
... skipping 314 lines ...
  INFO: Waiting for rolling upgrade to start.
  INFO: Waiting for MachineDeployment rolling upgrade to start
  INFO: Waiting for rolling upgrade to complete.
  INFO: Waiting for MachineDeployment rolling upgrade to complete
  STEP: PASSED! @ 02/02/23 05:38:23.652
  STEP: Dumping logs from the "md-rollout-qg5nam" workload cluster @ 02/02/23 05:38:23.652
Failed to get logs for Machine md-rollout-qg5nam-md-0-79f95bcfd8-69kfb, Cluster md-rollout-b6dbmh/md-rollout-qg5nam: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine md-rollout-qg5nam-n4nwd, Cluster md-rollout-b6dbmh/md-rollout-qg5nam: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "md-rollout-b6dbmh" namespace @ 02/02/23 05:38:28.303
  STEP: Deleting cluster md-rollout-b6dbmh/md-rollout-qg5nam @ 02/02/23 05:38:28.586
  STEP: Deleting cluster md-rollout-qg5nam @ 02/02/23 05:38:28.609
  INFO: Waiting for the Cluster md-rollout-b6dbmh/md-rollout-qg5nam to be deleted
  STEP: Waiting for cluster md-rollout-qg5nam to be deleted @ 02/02/23 05:38:28.624
  STEP: Deleting namespace used for hosting the "md-rollout" test spec @ 02/02/23 05:38:58.643
... skipping 44 lines ...
  INFO: Waiting for the machine deployments to be provisioned
  STEP: Waiting for the workload nodes to exist @ 02/02/23 05:41:30.101
  STEP: Checking all the machines controlled by quick-start-1fp0al-md-0 are in the "<None>" failure domain @ 02/02/23 05:42:20.157
  INFO: Waiting for the machine pools to be provisioned
  STEP: PASSED! @ 02/02/23 05:42:20.199
  STEP: Dumping logs from the "quick-start-1fp0al" workload cluster @ 02/02/23 05:42:20.199
Failed to get logs for Machine quick-start-1fp0al-hhnv4, Cluster quick-start-1ex5mm/quick-start-1fp0al: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine quick-start-1fp0al-md-0-59ff5d89d7-glzlj, Cluster quick-start-1ex5mm/quick-start-1fp0al: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "quick-start-1ex5mm" namespace @ 02/02/23 05:42:24.549
  STEP: Deleting cluster quick-start-1ex5mm/quick-start-1fp0al @ 02/02/23 05:42:24.874
  STEP: Deleting cluster quick-start-1fp0al @ 02/02/23 05:42:24.894
  INFO: Waiting for the Cluster quick-start-1ex5mm/quick-start-1fp0al to be deleted
  STEP: Waiting for cluster quick-start-1fp0al to be deleted @ 02/02/23 05:42:24.909
  STEP: Deleting namespace used for hosting the "quick-start" test spec @ 02/02/23 05:42:54.929
... skipping 50 lines ...
  INFO: Waiting for correct number of replicas to exist
  STEP: Scaling the MachineDeployment down to 1 @ 02/02/23 05:48:36.892
  INFO: Scaling machine deployment md-scale-8vpt84/md-scale-o7ghhn-md-0 from 3 to 1 replicas
  INFO: Waiting for correct number of replicas to exist
  STEP: PASSED! @ 02/02/23 05:48:46.996
  STEP: Dumping logs from the "md-scale-o7ghhn" workload cluster @ 02/02/23 05:48:46.996
Failed to get logs for Machine md-scale-o7ghhn-md-0-664c546658-k4kj2, Cluster md-scale-8vpt84/md-scale-o7ghhn: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine md-scale-o7ghhn-mxt9v, Cluster md-scale-8vpt84/md-scale-o7ghhn: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "md-scale-8vpt84" namespace @ 02/02/23 05:48:51.602
  STEP: Deleting cluster md-scale-8vpt84/md-scale-o7ghhn @ 02/02/23 05:48:51.867
  STEP: Deleting cluster md-scale-o7ghhn @ 02/02/23 05:48:51.889
  INFO: Waiting for the Cluster md-scale-8vpt84/md-scale-o7ghhn to be deleted
  STEP: Waiting for cluster md-scale-o7ghhn to be deleted @ 02/02/23 05:48:51.902
  STEP: Deleting namespace used for hosting the "md-scale" test spec @ 02/02/23 05:49:21.92
... skipping 44 lines ...
  INFO: Waiting for the machine deployments to be provisioned
  STEP: Waiting for the workload nodes to exist @ 02/02/23 05:52:13.291
  STEP: Checking all the machines controlled by quick-start-3ll45u-md-0-5vjh5 are in the "<None>" failure domain @ 02/02/23 05:53:23.374
  INFO: Waiting for the machine pools to be provisioned
  STEP: PASSED! @ 02/02/23 05:53:23.415
  STEP: Dumping logs from the "quick-start-3ll45u" workload cluster @ 02/02/23 05:53:23.415
Failed to get logs for Machine quick-start-3ll45u-md-0-5vjh5-57776c9b4b-gxtwj, Cluster quick-start-jjd3uy/quick-start-3ll45u: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine quick-start-3ll45u-vq7kg-2qt9h, Cluster quick-start-jjd3uy/quick-start-3ll45u: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "quick-start-jjd3uy" namespace @ 02/02/23 05:53:27.859
  STEP: Deleting cluster quick-start-jjd3uy/quick-start-3ll45u @ 02/02/23 05:53:28.151
  STEP: Deleting cluster quick-start-3ll45u @ 02/02/23 05:53:28.174
  INFO: Waiting for the Cluster quick-start-jjd3uy/quick-start-3ll45u to be deleted
  STEP: Waiting for cluster quick-start-3ll45u to be deleted @ 02/02/23 05:53:28.185
  STEP: Deleting namespace used for hosting the "quick-start" test spec @ 02/02/23 05:53:58.206
... skipping 227 lines ...
  Patching MachineHealthCheck unhealthy condition to one of the nodes
  INFO: Patching the node condition to the node
  Waiting for remediation
  Waiting until the node with unhealthy node condition is remediated
  STEP: PASSED! @ 02/02/23 06:14:15.941
  STEP: Dumping logs from the "mhc-remediation-peybv1" workload cluster @ 02/02/23 06:14:15.941
Failed to get logs for Machine mhc-remediation-peybv1-csl86, Cluster mhc-remediation-5269gs/mhc-remediation-peybv1: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine mhc-remediation-peybv1-md-0-55f46bfb49-pjszm, Cluster mhc-remediation-5269gs/mhc-remediation-peybv1: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "mhc-remediation-5269gs" namespace @ 02/02/23 06:14:20.439
  STEP: Deleting cluster mhc-remediation-5269gs/mhc-remediation-peybv1 @ 02/02/23 06:14:20.711
  STEP: Deleting cluster mhc-remediation-peybv1 @ 02/02/23 06:14:20.731
  INFO: Waiting for the Cluster mhc-remediation-5269gs/mhc-remediation-peybv1 to be deleted
  STEP: Waiting for cluster mhc-remediation-peybv1 to be deleted @ 02/02/23 06:14:20.751
  STEP: Deleting namespace used for hosting the "mhc-remediation" test spec @ 02/02/23 06:14:50.771
... skipping 54 lines ...
  Patching MachineHealthCheck unhealthy condition to one of the nodes
  INFO: Patching the node condition to the node
  Waiting for remediation
  Waiting until the node with unhealthy node condition is remediated
  STEP: PASSED! @ 02/02/23 06:24:18.421
  STEP: Dumping logs from the "mhc-remediation-pz6t4z" workload cluster @ 02/02/23 06:24:18.422
Failed to get logs for Machine mhc-remediation-pz6t4z-7zfhl, Cluster mhc-remediation-onpmpv/mhc-remediation-pz6t4z: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine mhc-remediation-pz6t4z-dg27q, Cluster mhc-remediation-onpmpv/mhc-remediation-pz6t4z: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine mhc-remediation-pz6t4z-md-0-5cdc9db75f-kp88r, Cluster mhc-remediation-onpmpv/mhc-remediation-pz6t4z: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine mhc-remediation-pz6t4z-vpn26, Cluster mhc-remediation-onpmpv/mhc-remediation-pz6t4z: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "mhc-remediation-onpmpv" namespace @ 02/02/23 06:24:26.304
  STEP: Deleting cluster mhc-remediation-onpmpv/mhc-remediation-pz6t4z @ 02/02/23 06:24:26.607
  STEP: Deleting cluster mhc-remediation-pz6t4z @ 02/02/23 06:24:26.624
  INFO: Waiting for the Cluster mhc-remediation-onpmpv/mhc-remediation-pz6t4z to be deleted
  STEP: Waiting for cluster mhc-remediation-pz6t4z to be deleted @ 02/02/23 06:24:26.637
  STEP: Deleting namespace used for hosting the "mhc-remediation" test spec @ 02/02/23 06:25:16.668
... skipping 36 lines ...
  STEP: Waiting for cluster to enter the provisioned phase @ 02/02/23 06:25:17.81
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by quick-start-ablv2q/quick-start-gq3lst to be provisioned
  STEP: Waiting for one control plane node to exist @ 02/02/23 06:25:37.844
  [TIMEDOUT] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/quick_start.go:78 @ 02/02/23 06:27:02.159
  STEP: Dumping logs from the "quick-start-gq3lst" workload cluster @ 02/02/23 06:27:02.161
Failed to get logs for Machine quick-start-gq3lst-bhwrv, Cluster quick-start-ablv2q/quick-start-gq3lst: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for Machine quick-start-gq3lst-md-0-58bb69ff65-t4nt5, Cluster quick-start-ablv2q/quick-start-gq3lst: dialing host IP address at : dial tcp :22: connect: connection refused
  STEP: Dumping all the Cluster API resources in the "quick-start-ablv2q" namespace @ 02/02/23 06:27:02.233
  STEP: Deleting cluster quick-start-ablv2q/quick-start-gq3lst @ 02/02/23 06:27:02.518
  STEP: Deleting cluster quick-start-gq3lst @ 02/02/23 06:27:02.534
  INFO: Waiting for the Cluster quick-start-ablv2q/quick-start-gq3lst to be deleted
  STEP: Waiting for cluster quick-start-gq3lst to be deleted @ 02/02/23 06:27:02.542
  [TIMEDOUT] in [AfterEach] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/quick_start.go:109 @ 02/02/23 06:27:32.161
... skipping 83 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001be69c0, 0x3a}, {0xc001be6a00, 0x35}, {0xc000eeed40, 0x1d}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 6 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {

        goroutine 24229 [sync.Cond.Wait]
... skipping 18 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00108da70, 0x28}, {0xc00108daa0, 0x23}, {0xc000dc72b4, 0xb}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 21 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001be7a40, 0x3e}, {0xc001be7a80, 0x39}, {0xc001ca78c0, 0x21}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 6 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {

        goroutine 24215 [sync.Cond.Wait]
... skipping 18 lines ...
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc002016270, 0x27}, {0xc0020162a0, 0x22}, {0xc0004177a4, 0xb}, ...}, ...}, ...)
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:186
              | out := bufio.NewWriter(f)
              | defer out.Flush()
              > _, err = out.ReadFrom(podLogs)
              | if err != nil && err != io.ErrUnexpectedEOF {
              | 	// Failing to stream logs should not cause the test to fail
        > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:161
              | 
              | // Watch each container's logs in a goroutine so we can stream them all concurrently.
              > go func(pod corev1.Pod, container corev1.Container) {
              | 	defer GinkgoRecover()
... skipping 29 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {

        goroutine 24222 [select]
... skipping 3 lines ...
              | for {
              > 	select {
              | 	case <-ctx.Done():
              | 		return
        > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
            /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/deployment_helpers.go:225
              | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
              | 
              > go func() {
              | 	defer GinkgoRecover()
              | 	for {

  There were additional failures detected.  To view them in detail run ginkgo -vv
... skipping 10 lines ...

Summarizing 1 Failure:
  [TIMEDOUT] Cluster creation with [Ignition] bootstrap [PR-Blocking] [It] Should create a workload cluster
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/quick_start.go:78

Ran 11 of 17 Specs in 3578.438 seconds
FAIL! - Suite Timeout Elapsed -- 10 Passed | 1 Failed | 1 Pending | 5 Skipped
--- FAIL: TestE2E (3578.44s)
FAIL

Ginkgo ran 1 suite in 1h0m31.795680412s

Test Suite Failed

real	60m31.817s
user	5m36.875s
sys	1m9.101s
make: *** [Makefile:183: e2e] Error 1
Releasing IP claims
ipclaim.ipam.metal3.io "ip-claim-bbf112f6a40d6db1d86baabb93117c38982882f3" deleted
ipclaim.ipam.metal3.io "workload-ip-claim-5969a58065ae3196aaae6e87d24d0cdd5d654ff0" deleted
vpn
WARNING: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com] appears to be a service account. Service account tokens cannot be revoked, but they will expire automatically. To prevent use of the service account token earlier than the expiration, delete or disable the parent service account. To explicitly delete the key associated with the service account use `gcloud iam service-accounts keys delete` instead.
Revoked credentials:
... skipping 13 lines ...