Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2023-01-28 17:21
Elapsed: 1h5m
Revision: main

No Test Failures! (no JUnit results were recorded for this run; the failure appears in the build log below)


Error lines from build-log.txt

... skipping 170 lines ...
#18 exporting to image
#18 exporting layers
#18 exporting layers 0.4s done
#18 writing image sha256:a37f6cdbad2f89ce5c42821700fb3283bf5d18fcce3f5370265dfc56eb17065f done
#18 naming to gcr.io/k8s-staging-cluster-api/capv-manager:e2e done
#18 DONE 0.4s
WARNING: failed to get git remote url: fatal: No remote configured to list refs from.
Activated service account credentials for: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com]
Copying file:///logs/artifacts/tempContainers/image.tar [Content-Type=application/x-tar]...
- [1 files][ 74.6 MiB/ 74.6 MiB]
Operation completed over 1 objects/74.6 MiB.                                     
make -C /home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/hack/tools ginkgo
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/hack/tools'
... skipping 126 lines ...

#18 exporting to image
#18 exporting layers done
#18 writing image sha256:a37f6cdbad2f89ce5c42821700fb3283bf5d18fcce3f5370265dfc56eb17065f done
#18 naming to gcr.io/k8s-staging-cluster-api/capv-manager:e2e done
#18 DONE 0.0s
WARNING: failed to get git remote url: fatal: No remote configured to list refs from.
make release-manifests
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
make manifests STAGE=release MANIFEST_DIR=out PULL_POLICY=IfNotPresent IMAGE=gcr.io/cluster-api-provider-vsphere/release/manager:v1.6.0
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
make generate-flavors FLAVOR_DIR=out
make[3]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
... skipping 443 lines ...
  INFO: Waiting for MachineDeployment rollout for MachineDeploymentTopology "md-0" (class "quick-start-worker") to complete.
  STEP: Deleting a MachineDeploymentTopology in the Cluster Topology and wait for associated MachineDeployment to be deleted @ 01/28/23 17:50:43.549
  INFO: Removing MachineDeploymentTopology from the Cluster Topology.
  INFO: Waiting for MachineDeployment to be deleted.
  STEP: PASSED! @ 01/28/23 17:50:53.641
  STEP: Dumping logs from the "clusterclass-changes-a5l0vw" workload cluster @ 01/28/23 17:50:53.641
Failed to get logs for Machine clusterclass-changes-a5l0vw-smcb7-vgkv9, Cluster clusterclass-changes-l72arx/clusterclass-changes-a5l0vw: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "clusterclass-changes-l72arx" namespace @ 01/28/23 17:50:55.736
  STEP: Deleting cluster clusterclass-changes-l72arx/clusterclass-changes-a5l0vw @ 01/28/23 17:50:56.032
  STEP: Deleting cluster clusterclass-changes-a5l0vw @ 01/28/23 17:50:56.053
  INFO: Waiting for the Cluster clusterclass-changes-l72arx/clusterclass-changes-a5l0vw to be deleted
  STEP: Waiting for cluster clusterclass-changes-a5l0vw to be deleted @ 01/28/23 17:50:56.065
  STEP: Deleting namespace used for hosting the "clusterclass-changes" test spec @ 01/28/23 17:51:16.081
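Every log-collection failure in this run has the same shape: the framework ran "cat /var/log/cloud-init-output.log" over SSH on the node and the command exited 1. A minimal manual check, assuming SSH access to the node (the capv user and key path are illustrative assumptions, not taken from this log):

  # Does the node have the file the log collector expects?
  ssh -i ~/.ssh/capv-e2e capv@<node-ip> 'ls -l /var/log/cloud-init*'
  # Some guest OS images never write cloud-init-output.log, so the cat exits 1
  # even when bootstrapping succeeded; cloud-init.log usually still exists.
  ssh -i ~/.ssh/capv-e2e capv@<node-ip> 'sudo tail -n 50 /var/log/cloud-init.log'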
... skipping 50 lines ...
  INFO: Waiting for rolling upgrade to start.
  INFO: Waiting for MachineDeployment rolling upgrade to start
  INFO: Waiting for rolling upgrade to complete.
  INFO: Waiting for MachineDeployment rolling upgrade to complete
  STEP: PASSED! @ 01/28/23 17:56:47.559
  STEP: Dumping logs from the "md-rollout-vhd2mc" workload cluster @ 01/28/23 17:56:47.56
Failed to get logs for Machine md-rollout-vhd2mc-4n65q, Cluster md-rollout-okq4p5/md-rollout-vhd2mc: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine md-rollout-vhd2mc-md-0-6dd996d978-x2796, Cluster md-rollout-okq4p5/md-rollout-vhd2mc: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "md-rollout-okq4p5" namespace @ 01/28/23 17:56:51.908
  STEP: Deleting cluster md-rollout-okq4p5/md-rollout-vhd2mc @ 01/28/23 17:56:52.298
  STEP: Deleting cluster md-rollout-vhd2mc @ 01/28/23 17:56:52.321
  INFO: Waiting for the Cluster md-rollout-okq4p5/md-rollout-vhd2mc to be deleted
  STEP: Waiting for cluster md-rollout-vhd2mc to be deleted @ 01/28/23 17:56:52.335
  STEP: Deleting namespace used for hosting the "md-rollout" test spec @ 01/28/23 17:57:22.356
... skipping 44 lines ...
  INFO: Waiting for the machine deployments to be provisioned
  STEP: Waiting for the workload nodes to exist @ 01/28/23 18:00:04.074
  STEP: Checking all the machines controlled by quick-start-h76gem-md-0 are in the "<None>" failure domain @ 01/28/23 18:01:14.164
  INFO: Waiting for the machine pools to be provisioned
  STEP: PASSED! @ 01/28/23 18:01:14.212
  STEP: Dumping logs from the "quick-start-h76gem" workload cluster @ 01/28/23 18:01:14.212
Failed to get logs for Machine quick-start-h76gem-m7vqk, Cluster quick-start-9ullyk/quick-start-h76gem: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine quick-start-h76gem-md-0-544f4b5985-ndstm, Cluster quick-start-9ullyk/quick-start-h76gem: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "quick-start-9ullyk" namespace @ 01/28/23 18:01:18.49
  STEP: Deleting cluster quick-start-9ullyk/quick-start-h76gem @ 01/28/23 18:01:18.766
  STEP: Deleting cluster quick-start-h76gem @ 01/28/23 18:01:18.783
  INFO: Waiting for the Cluster quick-start-9ullyk/quick-start-h76gem to be deleted
  STEP: Waiting for cluster quick-start-h76gem to be deleted @ 01/28/23 18:01:18.794
  STEP: Deleting namespace used for hosting the "quick-start" test spec @ 01/28/23 18:01:48.813
... skipping 50 lines ...
  INFO: Waiting for correct number of replicas to exist
  STEP: Scaling the MachineDeployment down to 1 @ 01/28/23 18:07:20.694
  INFO: Scaling machine deployment md-scale-ea5wr2/md-scale-x8eztr-md-0 from 3 to 1 replicas
  INFO: Waiting for correct number of replicas to exist
  STEP: PASSED! @ 01/28/23 18:07:30.856
  STEP: Dumping logs from the "md-scale-x8eztr" workload cluster @ 01/28/23 18:07:30.856
Failed to get logs for Machine md-scale-x8eztr-md-0-686b447fd4-pt7gq, Cluster md-scale-ea5wr2/md-scale-x8eztr: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine md-scale-x8eztr-sv8n5, Cluster md-scale-ea5wr2/md-scale-x8eztr: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "md-scale-ea5wr2" namespace @ 01/28/23 18:07:35.236
  STEP: Deleting cluster md-scale-ea5wr2/md-scale-x8eztr @ 01/28/23 18:07:35.514
  STEP: Deleting cluster md-scale-x8eztr @ 01/28/23 18:07:35.53
  INFO: Waiting for the Cluster md-scale-ea5wr2/md-scale-x8eztr to be deleted
  STEP: Waiting for cluster md-scale-x8eztr to be deleted @ 01/28/23 18:07:35.543
  STEP: Deleting namespace used for hosting the "md-scale" test spec @ 01/28/23 18:08:05.562
... skipping 44 lines ...
  INFO: Waiting for the machine deployments to be provisioned
  STEP: Waiting for the workload nodes to exist @ 01/28/23 18:10:26.827
  STEP: Checking all the machines controlled by quick-start-b9glg4-md-0 are in the "<None>" failure domain @ 01/28/23 18:11:26.901
  INFO: Waiting for the machine pools to be provisioned
  STEP: PASSED! @ 01/28/23 18:11:26.938
  STEP: Dumping logs from the "quick-start-b9glg4" workload cluster @ 01/28/23 18:11:26.938
Failed to get logs for Machine quick-start-b9glg4-md-0-9b6d7d8b8-29dl4, Cluster quick-start-54osnn/quick-start-b9glg4: dialing host IP address at 192.168.6.77: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to get logs for Machine quick-start-b9glg4-vw6hv, Cluster quick-start-54osnn/quick-start-b9glg4: dialing host IP address at 192.168.6.12: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
  STEP: Dumping all the Cluster API resources in the "quick-start-54osnn" namespace @ 01/28/23 18:11:29.404
  STEP: Deleting cluster quick-start-54osnn/quick-start-b9glg4 @ 01/28/23 18:11:29.686
  STEP: Deleting cluster quick-start-b9glg4 @ 01/28/23 18:11:29.705
  INFO: Waiting for the Cluster quick-start-54osnn/quick-start-b9glg4 to be deleted
  STEP: Waiting for cluster quick-start-b9glg4 to be deleted @ 01/28/23 18:11:29.717
  STEP: Deleting namespace used for hosting the "quick-start" test spec @ 01/28/23 18:11:59.737
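The two failures just above differ from the cloud-init ones: log collection never got past the SSH handshake ("attempted methods [none publickey], no supported methods remain"), i.e. the node rejected the collector's key rather than the command. A hedged way to see what is being offered (the key path is an assumption for illustration):

  # Verbose handshake output shows which identities are offered and why auth fails.
  ssh -vvv -i /path/to/e2e-ssh-key capv@192.168.6.77 true 2>&1 | grep -iE 'offering|authentications'
  # No "Offering public key" line usually means the wrong identity file;
  # an offer followed by rejection points at a missing authorized_keys entry in the VM template.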
... skipping 46 lines ...
  INFO: Waiting for the machine deployments to be provisioned
  STEP: Waiting for the workload nodes to exist @ 01/28/23 18:14:51.154
  STEP: Checking all the machines controlled by quick-start-0oyapo-md-0-tq4vn are in the "<None>" failure domain @ 01/28/23 18:15:31.209
  INFO: Waiting for the machine pools to be provisioned
  STEP: PASSED! @ 01/28/23 18:15:31.253
  STEP: Dumping logs from the "quick-start-0oyapo" workload cluster @ 01/28/23 18:15:31.253
Failed to get logs for Machine quick-start-0oyapo-md-0-tq4vn-7fd9886c9b-wdnmf, Cluster quick-start-xcl532/quick-start-0oyapo: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine quick-start-0oyapo-rkqn2-4lmgv, Cluster quick-start-xcl532/quick-start-0oyapo: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "quick-start-xcl532" namespace @ 01/28/23 18:15:35.399
  STEP: Deleting cluster quick-start-xcl532/quick-start-0oyapo @ 01/28/23 18:15:35.712
  STEP: Deleting cluster quick-start-0oyapo @ 01/28/23 18:15:35.729
  INFO: Waiting for the Cluster quick-start-xcl532/quick-start-0oyapo to be deleted
  STEP: Waiting for cluster quick-start-0oyapo to be deleted @ 01/28/23 18:15:35.74
  STEP: Deleting namespace used for hosting the "quick-start" test spec @ 01/28/23 18:16:05.763
... skipping 56 lines ...
  STEP: Waiting for deployment node-drain-xowdjn-unevictable-workload/unevictable-pod-r15 to be available @ 01/28/23 18:23:52.639
  STEP: Scale down the controlplane of the workload cluster and make sure that nodes running workload can be deleted even the draining process is blocked. @ 01/28/23 18:24:02.987
  INFO: Scaling controlplane node-drain-xowdjn/node-drain-jlwvqs from 3 to 1 replicas
  INFO: Waiting for correct number of replicas to exist
  [TIMEDOUT] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/node_drain_timeout.go:83 @ 01/28/23 18:25:39.017
  STEP: Dumping logs from the "node-drain-jlwvqs" workload cluster @ 01/28/23 18:25:39.019
Failed to get logs for Machine node-drain-jlwvqs-8jp5c, Cluster node-drain-xowdjn/node-drain-jlwvqs: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine node-drain-jlwvqs-cmpdf, Cluster node-drain-xowdjn/node-drain-jlwvqs: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine node-drain-jlwvqs-mtz4q, Cluster node-drain-xowdjn/node-drain-jlwvqs: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "node-drain-xowdjn" namespace @ 01/28/23 18:25:45.623
  STEP: Deleting cluster node-drain-xowdjn/node-drain-jlwvqs @ 01/28/23 18:25:45.923
  STEP: Deleting cluster node-drain-jlwvqs @ 01/28/23 18:25:45.94
  INFO: Waiting for the Cluster node-drain-xowdjn/node-drain-jlwvqs to be deleted
  STEP: Waiting for cluster node-drain-jlwvqs to be deleted @ 01/28/23 18:25:45.952
  [TIMEDOUT] in [AfterEach] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/node_drain_timeout.go:155 @ 01/28/23 18:26:09.019
... skipping 51 lines ...

Summarizing 1 Failure:
  [TIMEDOUT] When testing node drain timeout [It] A node should be forcefully removed if it cannot be drained in time
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/node_drain_timeout.go:83

Ran 10 of 17 Specs in 3575.869 seconds
FAIL! - Suite Timeout Elapsed -- 9 Passed | 1 Failed | 1 Pending | 6 Skipped
--- FAIL: TestE2E (3575.87s)
FAIL

Ginkgo ran 1 suite in 1h0m31.790345413s

Test Suite Failed
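The suite did not fail an assertion; it hit Ginkgo's suite timeout ("Suite Timeout Elapsed") while the node-drain spec was still waiting for the control plane to scale from 3 to 1 replicas. A sketch for reproducing just that spec with more headroom, using standard Ginkgo v2 flags (the package path is an assumption):

  # Re-run only the node-drain spec and raise the suite timeout beyond CI's ~1h budget.
  ginkgo --focus='node drain timeout' --timeout=2h ./test/e2e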

real	60m31.813s
user	5m45.677s
sys	1m14.614s
make: *** [Makefile:183: e2e] Error 1
Releasing IP claims
ipclaim.ipam.metal3.io "ip-claim-d944247fbaa518211ff8fd04fab994692acb0cad" deleted
ipclaim.ipam.metal3.io "workload-ip-claim-48dd2491493f5a7f53ec27364152e6f3e2858286" deleted
vpn
WARNING: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com] appears to be a service account. Service account tokens cannot be revoked, but they will expire automatically. To prevent use of the service account token earlier than the expiration, delete or disable the parent service account. To explicitly delete the key associated with the service account use `gcloud iam service-accounts keys delete` instead.
Revoked credentials:
... skipping 13 lines ...