Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2023-01-16 05:15
Elapsed: 1h4m
Revision: main

Error lines from build-log.txt

... skipping 594 lines ...
  STEP: Waiting for deployment node-drain-529xzw-unevictable-workload/unevictable-pod-b70 to be available @ 01/16/23 05:29:27.594
  STEP: Scale down the controlplane of the workload cluster and make sure that nodes running workload can be deleted even the draining process is blocked. @ 01/16/23 05:29:37.976
  INFO: Scaling controlplane node-drain-529xzw/node-drain-ufpwm6 from 3 to 1 replicas
  INFO: Waiting for correct number of replicas to exist
  STEP: PASSED! @ 01/16/23 05:33:08.504
  STEP: Dumping logs from the "node-drain-ufpwm6" workload cluster @ 01/16/23 05:33:08.504
Failed to get logs for Machine node-drain-ufpwm6-cd9jl, Cluster node-drain-529xzw/node-drain-ufpwm6: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "node-drain-529xzw" namespace @ 01/16/23 05:33:10.792
  STEP: Deleting cluster node-drain-529xzw/node-drain-ufpwm6 @ 01/16/23 05:33:11.083
  STEP: Deleting cluster node-drain-ufpwm6 @ 01/16/23 05:33:11.101
  INFO: Waiting for the Cluster node-drain-529xzw/node-drain-ufpwm6 to be deleted
  STEP: Waiting for cluster node-drain-ufpwm6 to be deleted @ 01/16/23 05:33:11.113
  STEP: Deleting namespace used for hosting the "node-drain" test spec @ 01/16/23 05:33:41.133
... skipping 58 lines ...
  INFO: Waiting for rolling upgrade to start.
  INFO: Waiting for MachineDeployment rolling upgrade to start
  INFO: Waiting for rolling upgrade to complete.
  INFO: Waiting for MachineDeployment rolling upgrade to complete
  STEP: PASSED! @ 01/16/23 05:39:32.81
  STEP: Dumping logs from the "md-rollout-wu4crw" workload cluster @ 01/16/23 05:39:32.81
Failed to get logs for Machine md-rollout-wu4crw-j25rg, Cluster md-rollout-sjzldc/md-rollout-wu4crw: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine md-rollout-wu4crw-md-0-5844567744-n5fsk, Cluster md-rollout-sjzldc/md-rollout-wu4crw: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "md-rollout-sjzldc" namespace @ 01/16/23 05:39:37.359
  STEP: Deleting cluster md-rollout-sjzldc/md-rollout-wu4crw @ 01/16/23 05:39:37.65
  STEP: Deleting cluster md-rollout-wu4crw @ 01/16/23 05:39:37.67
  INFO: Waiting for the Cluster md-rollout-sjzldc/md-rollout-wu4crw to be deleted
  STEP: Waiting for cluster md-rollout-wu4crw to be deleted @ 01/16/23 05:39:37.683
  STEP: Deleting namespace used for hosting the "md-rollout" test spec @ 01/16/23 05:40:07.699
... skipping 106 lines ...
  Patching MachineHealthCheck unhealthy condition to one of the nodes
  INFO: Patching the node condition to the node
  Waiting for remediation
  Waiting until the node with unhealthy node condition is remediated
  STEP: PASSED! @ 01/16/23 05:50:41.075
  STEP: Dumping logs from the "mhc-remediation-w8r7pt" workload cluster @ 01/16/23 05:50:41.075
Failed to get logs for Machine mhc-remediation-w8r7pt-md-0-5b77887d56-4252f, Cluster mhc-remediation-kvobuh/mhc-remediation-w8r7pt: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine mhc-remediation-w8r7pt-p9w48, Cluster mhc-remediation-kvobuh/mhc-remediation-w8r7pt: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "mhc-remediation-kvobuh" namespace @ 01/16/23 05:50:45.399
  STEP: Deleting cluster mhc-remediation-kvobuh/mhc-remediation-w8r7pt @ 01/16/23 05:50:45.673
  STEP: Deleting cluster mhc-remediation-w8r7pt @ 01/16/23 05:50:45.688
  INFO: Waiting for the Cluster mhc-remediation-kvobuh/mhc-remediation-w8r7pt to be deleted
  STEP: Waiting for cluster mhc-remediation-w8r7pt to be deleted @ 01/16/23 05:50:45.7
  STEP: Deleting namespace used for hosting the "mhc-remediation" test spec @ 01/16/23 05:51:15.718
... skipping 54 lines ...
  Patching MachineHealthCheck unhealthy condition to one of the nodes
  INFO: Patching the node condition to the node
  Waiting for remediation
  Waiting until the node with unhealthy node condition is remediated
  STEP: PASSED! @ 01/16/23 06:00:05.15
  STEP: Dumping logs from the "mhc-remediation-xcbzzc" workload cluster @ 01/16/23 06:00:05.151
Failed to get logs for Machine mhc-remediation-xcbzzc-gj47j, Cluster mhc-remediation-nbb64o/mhc-remediation-xcbzzc: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine mhc-remediation-xcbzzc-md-0-c75ffd79-w4qr2, Cluster mhc-remediation-nbb64o/mhc-remediation-xcbzzc: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine mhc-remediation-xcbzzc-p4sfl, Cluster mhc-remediation-nbb64o/mhc-remediation-xcbzzc: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine mhc-remediation-xcbzzc-qppl8, Cluster mhc-remediation-nbb64o/mhc-remediation-xcbzzc: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "mhc-remediation-nbb64o" namespace @ 01/16/23 06:00:13.057
  STEP: Deleting cluster mhc-remediation-nbb64o/mhc-remediation-xcbzzc @ 01/16/23 06:00:13.382
  STEP: Deleting cluster mhc-remediation-xcbzzc @ 01/16/23 06:00:13.403
  INFO: Waiting for the Cluster mhc-remediation-nbb64o/mhc-remediation-xcbzzc to be deleted
  STEP: Waiting for cluster mhc-remediation-xcbzzc to be deleted @ 01/16/23 06:00:13.419
  STEP: Deleting namespace used for hosting the "mhc-remediation" test spec @ 01/16/23 06:01:03.447
... skipping 44 lines ...
  INFO: Waiting for the machine deployments to be provisioned
  STEP: Waiting for the workload nodes to exist @ 01/16/23 06:04:04.831
  STEP: Checking all the machines controlled by quick-start-cd0bcn-md-0 are in the "<None>" failure domain @ 01/16/23 06:05:04.909
  INFO: Waiting for the machine pools to be provisioned
  STEP: PASSED! @ 01/16/23 06:05:04.95
  STEP: Dumping logs from the "quick-start-cd0bcn" workload cluster @ 01/16/23 06:05:04.95
Failed to get logs for Machine quick-start-cd0bcn-dvrvl, Cluster quick-start-ugz3zl/quick-start-cd0bcn: dialing host IP address at 192.168.6.122: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to get logs for Machine quick-start-cd0bcn-md-0-57ff886df-6zf5l, Cluster quick-start-ugz3zl/quick-start-cd0bcn: dialing host IP address at 192.168.6.33: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
  STEP: Dumping all the Cluster API resources in the "quick-start-ugz3zl" namespace @ 01/16/23 06:05:07.611
  STEP: Deleting cluster quick-start-ugz3zl/quick-start-cd0bcn @ 01/16/23 06:05:07.896
  STEP: Deleting cluster quick-start-cd0bcn @ 01/16/23 06:05:07.916
  INFO: Waiting for the Cluster quick-start-ugz3zl/quick-start-cd0bcn to be deleted
  STEP: Waiting for cluster quick-start-cd0bcn to be deleted @ 01/16/23 06:05:07.93
  STEP: Deleting namespace used for hosting the "quick-start" test spec @ 01/16/23 06:05:37.952
... skipping 50 lines ...
  INFO: Waiting for correct number of replicas to exist
  STEP: Scaling the MachineDeployment down to 1 @ 01/16/23 06:11:09.863
  INFO: Scaling machine deployment md-scale-3fwejj/md-scale-ex7nxs-md-0 from 3 to 1 replicas
  INFO: Waiting for correct number of replicas to exist
  STEP: PASSED! @ 01/16/23 06:11:19.98
  STEP: Dumping logs from the "md-scale-ex7nxs" workload cluster @ 01/16/23 06:11:19.98
Failed to get logs for Machine md-scale-ex7nxs-jl7km, Cluster md-scale-3fwejj/md-scale-ex7nxs: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine md-scale-ex7nxs-md-0-db97b959b-xr5tb, Cluster md-scale-3fwejj/md-scale-ex7nxs: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "md-scale-3fwejj" namespace @ 01/16/23 06:11:24.426
  STEP: Deleting cluster md-scale-3fwejj/md-scale-ex7nxs @ 01/16/23 06:11:24.697
  STEP: Deleting cluster md-scale-ex7nxs @ 01/16/23 06:11:24.716
  INFO: Waiting for the Cluster md-scale-3fwejj/md-scale-ex7nxs to be deleted
  STEP: Waiting for cluster md-scale-ex7nxs to be deleted @ 01/16/23 06:11:24.73
  STEP: Deleting namespace used for hosting the "md-scale" test spec @ 01/16/23 06:11:54.753
... skipping 44 lines ...
  INFO: Waiting for the machine deployments to be provisioned
  STEP: Waiting for the workload nodes to exist @ 01/16/23 06:14:36.023
  STEP: Checking all the machines controlled by quick-start-5hbuk8-md-0 are in the "<None>" failure domain @ 01/16/23 06:15:16.078
  INFO: Waiting for the machine pools to be provisioned
  STEP: PASSED! @ 01/16/23 06:15:16.123
  STEP: Dumping logs from the "quick-start-5hbuk8" workload cluster @ 01/16/23 06:15:16.123
Failed to get logs for Machine quick-start-5hbuk8-md-0-678699f8-tjv29, Cluster quick-start-rm1zr4/quick-start-5hbuk8: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine quick-start-5hbuk8-r7kz5, Cluster quick-start-rm1zr4/quick-start-5hbuk8: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "quick-start-rm1zr4" namespace @ 01/16/23 06:15:20.448
  STEP: Deleting cluster quick-start-rm1zr4/quick-start-5hbuk8 @ 01/16/23 06:15:20.713
  STEP: Deleting cluster quick-start-5hbuk8 @ 01/16/23 06:15:20.729
  INFO: Waiting for the Cluster quick-start-rm1zr4/quick-start-5hbuk8 to be deleted
  STEP: Waiting for cluster quick-start-5hbuk8 to be deleted @ 01/16/23 06:15:20.742
  STEP: Deleting namespace used for hosting the "quick-start" test spec @ 01/16/23 06:15:50.762
... skipping 180 lines ...

Summarizing 1 Failure:
  [TIMEDOUT] Cluster creation with anti affined nodes [It] should create a cluster with anti-affined nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/anti_affinity_test.go:61

Ran 9 of 17 Specs in 3584.847 seconds
FAIL! - Suite Timeout Elapsed -- 8 Passed | 1 Failed | 1 Pending | 7 Skipped
--- FAIL: TestE2E (3584.85s)
FAIL

Ginkgo ran 1 suite in 1h0m31.933568875s

Test Suite Failed

real	60m31.951s
user	4m59.071s
sys	0m58.959s
make: *** [Makefile:183: e2e] Error 1
Releasing IP claims
ipclaim.ipam.metal3.io "ip-claim-4a8db2ce74155e1e0389394c88a9c248dd9b35fa" deleted
ipclaim.ipam.metal3.io "workload-ip-claim-2ca5ef8aaabffb22563f055ecbeeec96bd33a86c" deleted
vpn
WARNING: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com] appears to be a service account. Service account tokens cannot be revoked, but they will expire automatically. To prevent use of the service account token earlier than the expiration, delete or disable the parent service account. To explicitly delete the key associated with the service account use `gcloud iam service-accounts keys delete` instead.
Revoked credentials:
... skipping 13 lines ...