Error lines from build-log.txt
... skipping 562 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist @ 01/22/23 17:28:41.723
STEP: Checking all the machines controlled by quick-start-50o6j8-md-0 are in the "<None>" failure domain @ 01/22/23 17:30:21.842
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED! @ 01/22/23 17:30:21.885
STEP: Dumping logs from the "quick-start-50o6j8" workload cluster @ 01/22/23 17:30:21.885
Failed to get logs for Machine quick-start-50o6j8-jvk9k, Cluster quick-start-xv7oh4/quick-start-50o6j8: dialing host IP address at 192.168.6.11: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to get logs for Machine quick-start-50o6j8-md-0-76d8d98d8c-ljb57, Cluster quick-start-xv7oh4/quick-start-50o6j8: dialing host IP address at 192.168.6.80: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
STEP: Dumping all the Cluster API resources in the "quick-start-xv7oh4" namespace @ 01/22/23 17:30:24.316
STEP: Deleting cluster quick-start-xv7oh4/quick-start-50o6j8 @ 01/22/23 17:30:24.644
STEP: Deleting cluster quick-start-50o6j8 @ 01/22/23 17:30:24.663
INFO: Waiting for the Cluster quick-start-xv7oh4/quick-start-50o6j8 to be deleted
STEP: Waiting for cluster quick-start-50o6j8 to be deleted @ 01/22/23 17:30:24.677
STEP: Deleting namespace used for hosting the "quick-start" test spec @ 01/22/23 17:30:54.695
... skipping 106 lines ...
Patching MachineHealthCheck unhealthy condition to one of the nodes
INFO: Patching the unhealthy condition onto the node
Waiting for remediation
Waiting until the node with unhealthy node condition is remediated
STEP: PASSED! @ 01/22/23 17:42:47.806
STEP: Dumping logs from the "mhc-remediation-tjubx6" workload cluster @ 01/22/23 17:42:47.806
Failed to get logs for Machine mhc-remediation-tjubx6-8zt6q, Cluster mhc-remediation-x4gtne/mhc-remediation-tjubx6: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine mhc-remediation-tjubx6-md-0-86b744d5f-4lvkv, Cluster mhc-remediation-x4gtne/mhc-remediation-tjubx6: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "mhc-remediation-x4gtne" namespace @ 01/22/23 17:42:52.012
STEP: Deleting cluster mhc-remediation-x4gtne/mhc-remediation-tjubx6 @ 01/22/23 17:42:52.312
STEP: Deleting cluster mhc-remediation-tjubx6 @ 01/22/23 17:42:52.331
INFO: Waiting for the Cluster mhc-remediation-x4gtne/mhc-remediation-tjubx6 to be deleted
STEP: Waiting for cluster mhc-remediation-tjubx6 to be deleted @ 01/22/23 17:42:52.346
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec @ 01/22/23 17:43:22.365
... skipping 54 lines ...
Patching MachineHealthCheck unhealthy condition to one of the nodes
INFO: Patching the unhealthy condition onto the node
Waiting for remediation
Waiting until the node with unhealthy node condition is remediated
STEP: PASSED! @ 01/22/23 17:52:23.636
STEP: Dumping logs from the "mhc-remediation-6dmua1" workload cluster @ 01/22/23 17:52:23.636
Failed to get logs for Machine mhc-remediation-6dmua1-bt5r7, Cluster mhc-remediation-n8lgt9/mhc-remediation-6dmua1: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine mhc-remediation-6dmua1-md-0-6678f447c9-5pq86, Cluster mhc-remediation-n8lgt9/mhc-remediation-6dmua1: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine mhc-remediation-6dmua1-nrfl8, Cluster mhc-remediation-n8lgt9/mhc-remediation-6dmua1: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine mhc-remediation-6dmua1-vq2tp, Cluster mhc-remediation-n8lgt9/mhc-remediation-6dmua1: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "mhc-remediation-n8lgt9" namespace @ 01/22/23 17:52:30.941
STEP: Deleting cluster mhc-remediation-n8lgt9/mhc-remediation-6dmua1 @ 01/22/23 17:52:31.273
STEP: Deleting cluster mhc-remediation-6dmua1 @ 01/22/23 17:52:31.291
INFO: Waiting for the Cluster mhc-remediation-n8lgt9/mhc-remediation-6dmua1 to be deleted
STEP: Waiting for cluster mhc-remediation-6dmua1 to be deleted @ 01/22/23 17:52:31.306
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec @ 01/22/23 17:53:11.331
... skipping 114 lines ...
INFO: Waiting for MachineDeployment rollout for MachineDeploymentTopology "md-0" (class "quick-start-worker") to complete.
STEP: Deleting a MachineDeploymentTopology in the Cluster Topology and waiting for the associated MachineDeployment to be deleted @ 01/22/23 18:02:37.057
INFO: Removing MachineDeploymentTopology from the Cluster Topology.
INFO: Waiting for MachineDeployment to be deleted.
STEP: PASSED! @ 01/22/23 18:02:47.149
STEP: Dumping logs from the "clusterclass-changes-axmxmq" workload cluster @ 01/22/23 18:02:47.15
Failed to get logs for Machine clusterclass-changes-axmxmq-58gj4-7w6xt, Cluster clusterclass-changes-ted1v6/clusterclass-changes-axmxmq: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "clusterclass-changes-ted1v6" namespace @ 01/22/23 18:02:49.271
STEP: Deleting cluster clusterclass-changes-ted1v6/clusterclass-changes-axmxmq @ 01/22/23 18:02:49.587
STEP: Deleting cluster clusterclass-changes-axmxmq @ 01/22/23 18:02:49.605
INFO: Waiting for the Cluster clusterclass-changes-ted1v6/clusterclass-changes-axmxmq to be deleted
STEP: Waiting for cluster clusterclass-changes-axmxmq to be deleted @ 01/22/23 18:02:49.616
STEP: Deleting namespace used for hosting the "clusterclass-changes" test spec @ 01/22/23 18:03:09.634
... skipping 56 lines ...
STEP: Waiting for deployment node-drain-aju3aa-unevictable-workload/unevictable-pod-zzj to be available @ 01/22/23 18:10:26.563
STEP: Scale down the control plane of the workload cluster and make sure that nodes running workloads can be deleted even if the draining process is blocked. @ 01/22/23 18:10:36.876
INFO: Scaling controlplane node-drain-aju3aa/node-drain-6m4oq4 from 3 to 1 replicas
INFO: Waiting for correct number of replicas to exist
STEP: PASSED! @ 01/22/23 18:14:17.481
STEP: Dumping logs from the "node-drain-6m4oq4" workload cluster @ 01/22/23 18:14:17.481
Failed to get logs for Machine node-drain-6m4oq4-qp9hc, Cluster node-drain-aju3aa/node-drain-6m4oq4: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "node-drain-aju3aa" namespace @ 01/22/23 18:14:19.658
STEP: Deleting cluster node-drain-aju3aa/node-drain-6m4oq4 @ 01/22/23 18:14:19.98
STEP: Deleting cluster node-drain-6m4oq4 @ 01/22/23 18:14:20.002
INFO: Waiting for the Cluster node-drain-aju3aa/node-drain-6m4oq4 to be deleted
STEP: Waiting for cluster node-drain-6m4oq4 to be deleted @ 01/22/23 18:14:20.016
STEP: Deleting namespace used for hosting the "node-drain" test spec @ 01/22/23 18:14:50.039
... skipping 44 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist @ 01/22/23 18:17:41.494
STEP: Checking all the machines controlled by quick-start-0z332g-md-0 are in the "<None>" failure domain @ 01/22/23 18:18:21.553
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED! @ 01/22/23 18:18:21.6
STEP: Dumping logs from the "quick-start-0z332g" workload cluster @ 01/22/23 18:18:21.6
Failed to get logs for Machine quick-start-0z332g-md-0-54d6768cf8-qtzvj, Cluster quick-start-n5fa8f/quick-start-0z332g: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine quick-start-0z332g-vgn75, Cluster quick-start-n5fa8f/quick-start-0z332g: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "quick-start-n5fa8f" namespace @ 01/22/23 18:18:26.177
STEP: Deleting cluster quick-start-n5fa8f/quick-start-0z332g @ 01/22/23 18:18:26.482
STEP: Deleting cluster quick-start-0z332g @ 01/22/23 18:18:26.504
INFO: Waiting for the Cluster quick-start-n5fa8f/quick-start-0z332g to be deleted
STEP: Waiting for cluster quick-start-0z332g to be deleted @ 01/22/23 18:18:26.518
STEP: Deleting namespace used for hosting the "quick-start" test spec @ 01/22/23 18:18:56.539
... skipping 47 lines ...
INFO: Waiting for the machine pools to be provisioned
STEP: Scaling the MachineDeployment out to 3 @ 01/22/23 18:22:27.996
INFO: Scaling machine deployment md-scale-btpbe5/md-scale-sxgsho-md-0 from 1 to 3 replicas
INFO: Waiting for correct number of replicas to exist
[TIMEDOUT] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/md_scale.go:71 @ 01/22/23 18:22:59.27
STEP: Dumping logs from the "md-scale-sxgsho" workload cluster @ 01/22/23 18:22:59.272
Failed to get logs for Machine md-scale-sxgsho-md-0-5c5cdff5cc-fvl6q, Cluster md-scale-btpbe5/md-scale-sxgsho: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for Machine md-scale-sxgsho-md-0-5c5cdff5cc-mkptm, Cluster md-scale-btpbe5/md-scale-sxgsho: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine md-scale-sxgsho-md-0-5c5cdff5cc-rcvfm, Cluster md-scale-btpbe5/md-scale-sxgsho: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for Machine md-scale-sxgsho-plggr, Cluster md-scale-btpbe5/md-scale-sxgsho: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "md-scale-btpbe5" namespace @ 01/22/23 18:23:03.132
STEP: Deleting cluster md-scale-btpbe5/md-scale-sxgsho @ 01/22/23 18:23:03.492
STEP: Deleting cluster md-scale-sxgsho @ 01/22/23 18:23:03.517
INFO: Waiting for the Cluster md-scale-btpbe5/md-scale-sxgsho to be deleted
STEP: Waiting for cluster md-scale-sxgsho to be deleted @ 01/22/23 18:23:03.532
[TIMEDOUT] in [AfterEach] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/md_scale.go:115 @ 01/22/23 18:23:29.272
... skipping 47 lines ...
Summarizing 1 Failure:
[TIMEDOUT] When testing MachineDeployment scale out/in [It] Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/md_scale.go:71
Ran 9 of 17 Specs in 3577.658 seconds
FAIL! - Suite Timeout Elapsed -- 8 Passed | 1 Failed | 1 Pending | 7 Skipped
--- FAIL: TestE2E (3577.66s)
FAIL
Ginkgo ran 1 suite in 1h0m31.913436122s
Test Suite Failed
real 60m31.935s
user 5m48.907s
sys 1m12.465s
make: *** [Makefile:183: e2e] Error 1
Releasing IP claims
ipclaim.ipam.metal3.io "ip-claim-48ece0d9b46e74d5697475a535e389cf70b5f5bd" deleted
ipclaim.ipam.metal3.io "workload-ip-claim-1cadc77c0415a6ffc066fb06765b540347523b98" deleted
vpn
WARNING: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com] appears to be a service account. Service account tokens cannot be revoked, but they will expire automatically. To prevent use of the service account token earlier than the expiration, delete or disable the parent service account. To explicitly delete the key associated with the service account, use `gcloud iam service-accounts keys delete` instead.
Revoked credentials:
... skipping 13 lines ...