Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2023-01-28 05:21
Elapsed: 1h5m
Revision: main

No Test Failures!


Error lines from build-log.txt

... skipping 186 lines ...
#18 exporting layers 0.4s done
#18 writing image sha256:3890c6bf98783c931007ad64ee11a885f3c977b5516d1aa2b15fa299a0c5d8c3 done
#18 naming to gcr.io/k8s-staging-cluster-api/capv-manager:e2e done
#18 DONE 0.4s

#10 [builder 1/6] FROM docker.io/library/golang:1.19.3@sha256:10e3c0f39f8e237baa5b66c5295c578cac42a99536cc9333d8505324a82407d9
WARNING: failed to get git remote url: fatal: No remote configured to list refs from.
Activated service account credentials for: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com]
Copying file:///logs/artifacts/tempContainers/image.tar [Content-Type=application/x-tar]...
Operation completed over 1 objects/74.6 MiB.
make -C /home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/hack/tools ginkgo
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/hack/tools'
... skipping 126 lines ...

#18 exporting to image
#18 exporting layers done
#18 writing image sha256:3890c6bf98783c931007ad64ee11a885f3c977b5516d1aa2b15fa299a0c5d8c3 done
#18 naming to gcr.io/k8s-staging-cluster-api/capv-manager:e2e done
#18 DONE 0.0s
WARNING: failed to get git remote url: fatal: No remote configured to list refs from.
make release-manifests
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
make manifests STAGE=release MANIFEST_DIR=out PULL_POLICY=IfNotPresent IMAGE=gcr.io/cluster-api-provider-vsphere/release/manager:v1.6.0
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
make generate-flavors FLAVOR_DIR=out
make[3]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
... skipping 317 lines ...
  Patching MachineHealthCheck unhealthy condition to one of the nodes
  INFO: Patching the node condition to the node
  Waiting for remediation
  Waiting until the node with unhealthy node condition is remediated
  STEP: PASSED! @ 01/28/23 05:39:00.419
  STEP: Dumping logs from the "mhc-remediation-97wlz4" workload cluster @ 01/28/23 05:39:00.419
Failed to get logs for Machine mhc-remediation-97wlz4-c4s5s, Cluster mhc-remediation-8bkmp7/mhc-remediation-97wlz4: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine mhc-remediation-97wlz4-md-0-546f8d7bfc-8d49q, Cluster mhc-remediation-8bkmp7/mhc-remediation-97wlz4: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "mhc-remediation-8bkmp7" namespace @ 01/28/23 05:39:04.913
  STEP: Deleting cluster mhc-remediation-8bkmp7/mhc-remediation-97wlz4 @ 01/28/23 05:39:05.203
  STEP: Deleting cluster mhc-remediation-97wlz4 @ 01/28/23 05:39:05.219
  INFO: Waiting for the Cluster mhc-remediation-8bkmp7/mhc-remediation-97wlz4 to be deleted
  STEP: Waiting for cluster mhc-remediation-97wlz4 to be deleted @ 01/28/23 05:39:05.232
  STEP: Deleting namespace used for hosting the "mhc-remediation" test spec @ 01/28/23 05:39:35.252
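The "Failed to get logs" lines above (which recur throughout this run) are emitted by the e2e log collector, which appears to run `cat /var/log/cloud-init-output.log` on each workload node; an exit status of 1 typically means the file is missing or unreadable on that node image, and it does not by itself fail the spec. A minimal sketch of inspecting the same file by hand, assuming the capv SSH user that CAPV's image-builder templates create (the node IP is a hypothetical placeholder, not taken from this log):

  # The capv user and the node IP are assumptions about the test VM setup,
  # not values confirmed by this log.
  ssh capv@192.168.100.10 'sudo cat /var/log/cloud-init-output.log'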
... skipping 54 lines ...
  Patching MachineHealthCheck unhealthy condition to one of the nodes
  INFO: Patching the node condition to the node
  Waiting for remediation
  Waiting until the node with unhealthy node condition is remediated
  STEP: PASSED! @ 01/28/23 05:49:19.546
  STEP: Dumping logs from the "mhc-remediation-9u4p02" workload cluster @ 01/28/23 05:49:19.546
Failed to get logs for Machine mhc-remediation-9u4p02-md-0-66cf5d4885-prgjf, Cluster mhc-remediation-ussez1/mhc-remediation-9u4p02: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine mhc-remediation-9u4p02-r94cj, Cluster mhc-remediation-ussez1/mhc-remediation-9u4p02: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine mhc-remediation-9u4p02-v5qlb, Cluster mhc-remediation-ussez1/mhc-remediation-9u4p02: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine mhc-remediation-9u4p02-vjd52, Cluster mhc-remediation-ussez1/mhc-remediation-9u4p02: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "mhc-remediation-ussez1" namespace @ 01/28/23 05:49:27.254
  STEP: Deleting cluster mhc-remediation-ussez1/mhc-remediation-9u4p02 @ 01/28/23 05:49:27.564
  STEP: Deleting cluster mhc-remediation-9u4p02 @ 01/28/23 05:49:27.581
  INFO: Waiting for the Cluster mhc-remediation-ussez1/mhc-remediation-9u4p02 to be deleted
  STEP: Waiting for cluster mhc-remediation-9u4p02 to be deleted @ 01/28/23 05:49:27.594
  STEP: Deleting namespace used for hosting the "mhc-remediation" test spec @ 01/28/23 05:50:07.623
... skipping 44 lines ...
  INFO: Waiting for the machine deployments to be provisioned
  STEP: Waiting for the workload nodes to exist @ 01/28/23 05:52:38.992
  STEP: Checking all the machines controlled by quick-start-tgy74o-md-0-swd26 are in the "<None>" failure domain @ 01/28/23 05:54:09.098
  INFO: Waiting for the machine pools to be provisioned
  STEP: PASSED! @ 01/28/23 05:54:09.135
  STEP: Dumping logs from the "quick-start-tgy74o" workload cluster @ 01/28/23 05:54:09.135
Failed to get logs for Machine quick-start-tgy74o-md-0-swd26-585f54dc8f-2cv8l, Cluster quick-start-4x5wdb/quick-start-tgy74o: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine quick-start-tgy74o-q8vg7-6hjhp, Cluster quick-start-4x5wdb/quick-start-tgy74o: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "quick-start-4x5wdb" namespace @ 01/28/23 05:54:13.53
  STEP: Deleting cluster quick-start-4x5wdb/quick-start-tgy74o @ 01/28/23 05:54:13.834
  STEP: Deleting cluster quick-start-tgy74o @ 01/28/23 05:54:13.853
  INFO: Waiting for the Cluster quick-start-4x5wdb/quick-start-tgy74o to be deleted
  STEP: Waiting for cluster quick-start-tgy74o to be deleted @ 01/28/23 05:54:13.868
  STEP: Deleting namespace used for hosting the "quick-start" test spec @ 01/28/23 05:54:43.889
... skipping 99 lines ...
  INFO: Waiting for the machine deployments to be provisioned
  STEP: Waiting for the workload nodes to exist @ 01/28/23 06:00:08.601
  STEP: Checking all the machines controlled by quick-start-twxp50-md-0 are in the "<None>" failure domain @ 01/28/23 06:01:48.717
  INFO: Waiting for the machine pools to be provisioned
  STEP: PASSED! @ 01/28/23 06:01:48.754
  STEP: Dumping logs from the "quick-start-twxp50" workload cluster @ 01/28/23 06:01:48.754
Failed to get logs for Machine quick-start-twxp50-gmkvk, Cluster quick-start-99n77w/quick-start-twxp50: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine quick-start-twxp50-md-0-68c87f6b57-5jhvz, Cluster quick-start-99n77w/quick-start-twxp50: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "quick-start-99n77w" namespace @ 01/28/23 06:01:53.527
  STEP: Deleting cluster quick-start-99n77w/quick-start-twxp50 @ 01/28/23 06:01:53.811
  STEP: Deleting cluster quick-start-twxp50 @ 01/28/23 06:01:53.827
  INFO: Waiting for the Cluster quick-start-99n77w/quick-start-twxp50 to be deleted
  STEP: Waiting for cluster quick-start-twxp50 to be deleted @ 01/28/23 06:01:53.843
  STEP: Deleting namespace used for hosting the "quick-start" test spec @ 01/28/23 06:02:23.864
... skipping 50 lines ...
  INFO: Waiting for rolling upgrade to start.
  INFO: Waiting for MachineDeployment rolling upgrade to start
  INFO: Waiting for rolling upgrade to complete.
  INFO: Waiting for MachineDeployment rolling upgrade to complete
  STEP: PASSED! @ 01/28/23 06:08:55.427
  STEP: Dumping logs from the "md-rollout-faxhf7" workload cluster @ 01/28/23 06:08:55.428
Failed to get logs for Machine md-rollout-faxhf7-h4l28, Cluster md-rollout-7zu8y0/md-rollout-faxhf7: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine md-rollout-faxhf7-md-0-57b769888f-qrm6r, Cluster md-rollout-7zu8y0/md-rollout-faxhf7: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "md-rollout-7zu8y0" namespace @ 01/28/23 06:08:59.721
  STEP: Deleting cluster md-rollout-7zu8y0/md-rollout-faxhf7 @ 01/28/23 06:09:00.01
  STEP: Deleting cluster md-rollout-faxhf7 @ 01/28/23 06:09:00.03
  INFO: Waiting for the Cluster md-rollout-7zu8y0/md-rollout-faxhf7 to be deleted
  STEP: Waiting for cluster md-rollout-faxhf7 to be deleted @ 01/28/23 06:09:00.043
  STEP: Deleting namespace used for hosting the "md-rollout" test spec @ 01/28/23 06:09:30.06
... skipping 57 lines ...
  INFO: Waiting for MachineDeployment rollout for MachineDeploymentTopology "md-0" (class "quick-start-worker") to complete.
  STEP: Deleting a MachineDeploymentTopology in the Cluster Topology and wait for associated MachineDeployment to be deleted @ 01/28/23 06:13:31.823
  INFO: Removing MachineDeploymentTopology from the Cluster Topology.
  INFO: Waiting for MachineDeployment to be deleted.
  STEP: PASSED! @ 01/28/23 06:13:41.9
  STEP: Dumping logs from the "clusterclass-changes-7tfun9" workload cluster @ 01/28/23 06:13:41.9
Failed to get logs for Machine clusterclass-changes-7tfun9-g55dx-7x227, Cluster clusterclass-changes-af8to2/clusterclass-changes-7tfun9: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "clusterclass-changes-af8to2" namespace @ 01/28/23 06:13:44.248
  STEP: Deleting cluster clusterclass-changes-af8to2/clusterclass-changes-7tfun9 @ 01/28/23 06:13:44.541
  STEP: Deleting cluster clusterclass-changes-7tfun9 @ 01/28/23 06:13:44.563
  INFO: Waiting for the Cluster clusterclass-changes-af8to2/clusterclass-changes-7tfun9 to be deleted
  STEP: Waiting for cluster clusterclass-changes-7tfun9 to be deleted @ 01/28/23 06:13:44.574
  STEP: Deleting namespace used for hosting the "clusterclass-changes" test spec @ 01/28/23 06:14:04.59
... skipping 57 lines ...
  STEP: Scale down the controlplane of the workload cluster and make sure that nodes running workload can be deleted even the draining process is blocked. @ 01/28/23 06:21:31.985
  INFO: Scaling controlplane node-drain-rad3zp/node-drain-ltnmyr from 3 to 1 replicas
  INFO: Waiting for correct number of replicas to exist
  [TIMEDOUT] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/node_drain_timeout.go:83 @ 01/28/23 06:25:31.867
  STEP: Dumping logs from the "node-drain-ltnmyr" workload cluster @ 01/28/23 06:25:31.868
  STEP: PASSED! @ 01/28/23 06:25:32.638
Failed to get logs for Machine node-drain-ltnmyr-fkrfx, Cluster node-drain-rad3zp/node-drain-ltnmyr: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "node-drain-rad3zp" namespace @ 01/28/23 06:25:34.035
  STEP: Deleting cluster node-drain-rad3zp/node-drain-ltnmyr @ 01/28/23 06:25:34.436
  STEP: Deleting cluster node-drain-ltnmyr @ 01/28/23 06:25:34.461
  INFO: Waiting for the Cluster node-drain-rad3zp/node-drain-ltnmyr to be deleted
  STEP: Waiting for cluster node-drain-ltnmyr to be deleted @ 01/28/23 06:25:34.482
  [TIMEDOUT] in [AfterEach] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/node_drain_timeout.go:155 @ 01/28/23 06:26:01.869
... skipping 51 lines ...

Summarizing 1 Failure:
  [TIMEDOUT] When testing node drain timeout [It] A node should be forcefully removed if it cannot be drained in time
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/node_drain_timeout.go:83

Ran 9 of 17 Specs in 3577.745 seconds
FAIL! - Suite Timeout Elapsed -- 8 Passed | 1 Failed | 1 Pending | 7 Skipped
--- FAIL: TestE2E (3577.75s)
FAIL

Ginkgo ran 1 suite in 1h0m31.488699347s

Test Suite Failed

real	60m31.508s
user	5m42.097s
sys	1m2.279s
make: *** [Makefile:183: e2e] Error 1
Releasing IP claims
ipclaim.ipam.metal3.io "ip-claim-c45c1121e06e48d7f569f169e067b958543720e6" deleted
ipclaim.ipam.metal3.io "workload-ip-claim-887cee2d81d52353b6f8ad65feef7468875de022" deleted
vpn
WARNING: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com] appears to be a service account. Service account tokens cannot be revoked, but they will expire automatically. To prevent use of the service account token earlier than the expiration, delete or disable the parent service account. To explicitly delete the key associated with the service account use `gcloud iam service-accounts keys delete` instead.
Revoked credentials:
... skipping 13 lines ...
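The "Suite Timeout Elapsed" result above is consistent with Ginkgo v2's default suite timeout of 1h (the suite ran 3577.745s before being cut off mid-spec, leaving 7 specs skipped). To iterate on the timed-out node-drain spec locally, Ginkgo's focus filter can restrict the run to that one test and the suite timeout can be raised. A sketch, assuming the repo's e2e Makefile target forwards GINKGO_FOCUS and GINKGO_TIMEOUT to the runner; that is a common Cluster API provider convention, but it is not confirmed by this log:

  # Both variable names are assumptions about this Makefile, not shown above.
  cd /home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere
  GINKGO_FOCUS='When testing node drain timeout' GINKGO_TIMEOUT=2h make e2e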