Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2023-01-15 05:15
Elapsed: 1h5m
Revision: main

No Test Failures!


Error lines from build-log.txt

... skipping 574 lines ...
  INFO: Waiting for the machine deployments to be provisioned
  STEP: Waiting for the workload nodes to exist @ 01/15/23 05:25:05.247
  STEP: Checking all the machines controlled by quick-start-ag4bg3-md-0-n46cl are in the "<None>" failure domain @ 01/15/23 05:26:35.354
  INFO: Waiting for the machine pools to be provisioned
  STEP: PASSED! @ 01/15/23 05:26:35.393
  STEP: Dumping logs from the "quick-start-ag4bg3" workload cluster @ 01/15/23 05:26:35.393
Failed to get logs for Machine quick-start-ag4bg3-md-0-n46cl-dc687b9b9-r5d6j, Cluster quick-start-7b8cxd/quick-start-ag4bg3: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine quick-start-ag4bg3-n8ctw-xx49l, Cluster quick-start-7b8cxd/quick-start-ag4bg3: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "quick-start-7b8cxd" namespace @ 01/15/23 05:26:39.469
  STEP: Deleting cluster quick-start-7b8cxd/quick-start-ag4bg3 @ 01/15/23 05:26:39.766
  STEP: Deleting cluster quick-start-ag4bg3 @ 01/15/23 05:26:39.783
  INFO: Waiting for the Cluster quick-start-7b8cxd/quick-start-ag4bg3 to be deleted
  STEP: Waiting for cluster quick-start-ag4bg3 to be deleted @ 01/15/23 05:26:39.794
  STEP: Deleting namespace used for hosting the "quick-start" test spec @ 01/15/23 05:27:09.816
... skipping 44 lines ...
  INFO: Waiting for the machine deployments to be provisioned
  STEP: Waiting for the workload nodes to exist @ 01/15/23 05:30:01.033
  STEP: Checking all the machines controlled by quick-start-ij5iqc-md-0 are in the "<None>" failure domain @ 01/15/23 05:31:11.12
  INFO: Waiting for the machine pools to be provisioned
  STEP: PASSED! @ 01/15/23 05:31:11.158
  STEP: Dumping logs from the "quick-start-ij5iqc" workload cluster @ 01/15/23 05:31:11.158
Failed to get logs for Machine quick-start-ij5iqc-k9vgx, Cluster quick-start-swsdur/quick-start-ij5iqc: dialing host IP address at 192.168.6.152: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to get logs for Machine quick-start-ij5iqc-md-0-748ddfbb96-s76z6, Cluster quick-start-swsdur/quick-start-ij5iqc: dialing host IP address at 192.168.6.74: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
  STEP: Dumping all the Cluster API resources in the "quick-start-swsdur" namespace @ 01/15/23 05:31:13.602
  STEP: Deleting cluster quick-start-swsdur/quick-start-ij5iqc @ 01/15/23 05:31:13.879
  STEP: Deleting cluster quick-start-ij5iqc @ 01/15/23 05:31:13.896
  INFO: Waiting for the Cluster quick-start-swsdur/quick-start-ij5iqc to be deleted
  STEP: Waiting for cluster quick-start-ij5iqc to be deleted @ 01/15/23 05:31:13.914
  STEP: Deleting namespace used for hosting the "quick-start" test spec @ 01/15/23 05:31:43.934
... skipping 48 lines ...
  INFO: Waiting for the machine deployments to be provisioned
  STEP: Waiting for the workload nodes to exist @ 01/15/23 05:34:15.309
  STEP: Checking all the machines controlled by quick-start-l6plyt-md-0 are in the "<None>" failure domain @ 01/15/23 05:35:05.372
  INFO: Waiting for the machine pools to be provisioned
  STEP: PASSED! @ 01/15/23 05:35:05.423
  STEP: Dumping logs from the "quick-start-l6plyt" workload cluster @ 01/15/23 05:35:05.423
Failed to get logs for Machine quick-start-l6plyt-kjl5z, Cluster quick-start-hiz7oz/quick-start-l6plyt: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine quick-start-l6plyt-md-0-8488d5cf8-vp4t5, Cluster quick-start-hiz7oz/quick-start-l6plyt: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "quick-start-hiz7oz" namespace @ 01/15/23 05:35:09.611
  STEP: Deleting cluster quick-start-hiz7oz/quick-start-l6plyt @ 01/15/23 05:35:09.956
  STEP: Deleting cluster quick-start-l6plyt @ 01/15/23 05:35:09.975
  INFO: Waiting for the Cluster quick-start-hiz7oz/quick-start-l6plyt to be deleted
  STEP: Waiting for cluster quick-start-l6plyt to be deleted @ 01/15/23 05:35:09.991
  STEP: Deleting namespace used for hosting the "quick-start" test spec @ 01/15/23 05:35:40.014
... skipping 116 lines ...
  INFO: Waiting for correct number of replicas to exist
  STEP: Scaling the MachineDeployment down to 1 @ 01/15/23 05:49:50.241
  INFO: Scaling machine deployment md-scale-qlm3bq/md-scale-yyzv3u-md-0 from 3 to 1 replicas
  INFO: Waiting for correct number of replicas to exist
  STEP: PASSED! @ 01/15/23 05:50:00.353
  STEP: Dumping logs from the "md-scale-yyzv3u" workload cluster @ 01/15/23 05:50:00.354
Failed to get logs for Machine md-scale-yyzv3u-hs4dm, Cluster md-scale-qlm3bq/md-scale-yyzv3u: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine md-scale-yyzv3u-md-0-64fc4b74c5-kt6dw, Cluster md-scale-qlm3bq/md-scale-yyzv3u: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "md-scale-qlm3bq" namespace @ 01/15/23 05:50:04.57
  STEP: Deleting cluster md-scale-qlm3bq/md-scale-yyzv3u @ 01/15/23 05:50:04.85
  STEP: Deleting cluster md-scale-yyzv3u @ 01/15/23 05:50:04.87
  INFO: Waiting for the Cluster md-scale-qlm3bq/md-scale-yyzv3u to be deleted
  STEP: Waiting for cluster md-scale-yyzv3u to be deleted @ 01/15/23 05:50:04.883
  STEP: Deleting namespace used for hosting the "md-scale" test spec @ 01/15/23 05:50:34.905
... skipping 109 lines ...
  Patching MachineHealthCheck unhealthy condition to one of the nodes
  INFO: Patching the node condition to the node
  Waiting for remediation
  Waiting until the node with unhealthy node condition is remediated
  STEP: PASSED! @ 01/15/23 06:00:34.039
  STEP: Dumping logs from the "mhc-remediation-dwlf1n" workload cluster @ 01/15/23 06:00:34.039
Failed to get logs for Machine mhc-remediation-dwlf1n-md-0-6db4fd7c88-mptxb, Cluster mhc-remediation-sbf2hm/mhc-remediation-dwlf1n: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine mhc-remediation-dwlf1n-xfv8j, Cluster mhc-remediation-sbf2hm/mhc-remediation-dwlf1n: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "mhc-remediation-sbf2hm" namespace @ 01/15/23 06:00:38.199
  STEP: Deleting cluster mhc-remediation-sbf2hm/mhc-remediation-dwlf1n @ 01/15/23 06:00:38.495
  STEP: Deleting cluster mhc-remediation-dwlf1n @ 01/15/23 06:00:38.512
  INFO: Waiting for the Cluster mhc-remediation-sbf2hm/mhc-remediation-dwlf1n to be deleted
  STEP: Waiting for cluster mhc-remediation-dwlf1n to be deleted @ 01/15/23 06:00:38.526
  STEP: Deleting namespace used for hosting the "mhc-remediation" test spec @ 01/15/23 06:01:08.55
... skipping 54 lines ...
  Patching MachineHealthCheck unhealthy condition to one of the nodes
  INFO: Patching the node condition to the node
  Waiting for remediation
  Waiting until the node with unhealthy node condition is remediated
  STEP: PASSED! @ 01/15/23 06:09:48.106
  STEP: Dumping logs from the "mhc-remediation-i0o88p" workload cluster @ 01/15/23 06:09:48.107
Failed to get logs for Machine mhc-remediation-i0o88p-md-0-86646cfd7f-vj6v6, Cluster mhc-remediation-rzxobs/mhc-remediation-i0o88p: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine mhc-remediation-i0o88p-vd6ns, Cluster mhc-remediation-rzxobs/mhc-remediation-i0o88p: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine mhc-remediation-i0o88p-z2l8b, Cluster mhc-remediation-rzxobs/mhc-remediation-i0o88p: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine mhc-remediation-i0o88p-zfhs7, Cluster mhc-remediation-rzxobs/mhc-remediation-i0o88p: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "mhc-remediation-rzxobs" namespace @ 01/15/23 06:09:55.228
  STEP: Deleting cluster mhc-remediation-rzxobs/mhc-remediation-i0o88p @ 01/15/23 06:09:55.592
  STEP: Deleting cluster mhc-remediation-i0o88p @ 01/15/23 06:09:55.609
  INFO: Waiting for the Cluster mhc-remediation-rzxobs/mhc-remediation-i0o88p to be deleted
  STEP: Waiting for cluster mhc-remediation-i0o88p to be deleted @ 01/15/23 06:09:55.622
  STEP: Deleting namespace used for hosting the "mhc-remediation" test spec @ 01/15/23 06:10:45.654
... skipping 112 lines ...
  INFO: Waiting for rolling upgrade to start.
  INFO: Waiting for MachineDeployment rolling upgrade to start
  INFO: Waiting for rolling upgrade to complete.
  INFO: Waiting for MachineDeployment rolling upgrade to complete
  [TIMEDOUT] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/md_rollout.go:71 @ 01/15/23 06:19:51.075
  STEP: Dumping logs from the "md-rollout-32sv7l" workload cluster @ 01/15/23 06:19:51.076
Failed to get logs for Machine md-rollout-32sv7l-clsck, Cluster md-rollout-6yfgw8/md-rollout-32sv7l: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine md-rollout-32sv7l-md-0-7b77486fbc-bgcwk, Cluster md-rollout-6yfgw8/md-rollout-32sv7l: dialing host IP address at 192.168.6.68: dial tcp 192.168.6.68:22: connect: no route to host
Failed to get logs for Machine md-rollout-32sv7l-md-0-c5d57b9b5-qv74x, Cluster md-rollout-6yfgw8/md-rollout-32sv7l: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
  STEP: Dumping all the Cluster API resources in the "md-rollout-6yfgw8" namespace @ 01/15/23 06:19:58.273
  STEP: Deleting cluster md-rollout-6yfgw8/md-rollout-32sv7l @ 01/15/23 06:19:58.668
  STEP: Deleting cluster md-rollout-32sv7l @ 01/15/23 06:19:58.691
  INFO: Waiting for the Cluster md-rollout-6yfgw8/md-rollout-32sv7l to be deleted
  STEP: Waiting for cluster md-rollout-32sv7l to be deleted @ 01/15/23 06:19:58.704
  [TIMEDOUT] in [AfterEach] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/md_rollout.go:103 @ 01/15/23 06:20:21.078
... skipping 67 lines ...

Summarizing 1 Failure:
  [TIMEDOUT] ClusterAPI Machine Deployment Tests Running the MachineDeployment rollout spec [It] Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/md_rollout.go:71

Ran 10 of 17 Specs in 3578.714 seconds
FAIL! - Suite Timeout Elapsed -- 9 Passed | 1 Failed | 1 Pending | 6 Skipped
--- FAIL: TestE2E (3578.71s)
FAIL

Ginkgo ran 1 suite in 1h0m31.934649766s

Test Suite Failed

real	60m31.959s
user	5m40.371s
sys	1m9.940s
make: *** [Makefile:183: e2e] Error 1
Releasing IP claims
ipclaim.ipam.metal3.io "ip-claim-d14e399b95fe75f818e9455a349bdc9dcb7f4ed5" deleted
ipclaim.ipam.metal3.io "workload-ip-claim-81afd0ba9a71f11dcbb72e77343d7c3df3bccb32" deleted
vpn
WARNING: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com] appears to be a service account. Service account tokens cannot be revoked, but they will expire automatically. To prevent use of the service account token earlier than the expiration, delete or disable the parent service account. To explicitly delete the key associated with the service account use `gcloud iam service-accounts keys delete` instead.
Revoked credentials:
... skipping 13 lines ...