PR: chewong: 🏃 support MachinePool clusters in ci-entrypoint.sh
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-05-30 00:47
Elapsed: 52m9s
Revision: ccfceadf5570b7bd1f11f15c3fd94440b9ad1565
Refs: 659
Resultstore: https://source.cloud.google.com/results/invocations/c4865424-fd56-4f83-91d7-1c10cb420031/targets/test

No Test Failures!


Error lines from build-log.txt

... skipping 1045 lines ...
timeout --foreground 300 bash -c "while ! kubectl get secrets | grep capz-gfkahm-kubeconfig; do sleep 1; done"
capz-gfkahm-kubeconfig   Opaque                                1      0s
# Get kubeconfig and store it locally.
kubectl get secrets capz-gfkahm-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
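These two steps follow the standard Cluster API pattern for reaching a workload cluster: poll until the <cluster-name>-kubeconfig secret exists, then decode its .data.value field. A minimal standalone sketch of the same pattern, with CLUSTER_NAME standing in for the generated name (the script hard-codes capz-gfkahm here):

# Poll (up to 5 minutes) until the workload cluster's kubeconfig secret exists.
CLUSTER_NAME=capz-gfkahm
timeout --foreground 300 bash -c \
  "while ! kubectl get secret ${CLUSTER_NAME}-kubeconfig; do sleep 1; done"
# Decode the kubeconfig stored in the secret's .data.value field.
kubectl get secret "${CLUSTER_NAME}-kubeconfig" -o jsonpath='{.data.value}' \
  | base64 --decode > ./kubeconfig

Using -o jsonpath instead of piping through jq drops the jq dependency; both yield the same base64-encoded value.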
timeout --foreground 600 bash -c "while ! kubectl --kubeconfig=./kubeconfig get nodes | grep master; do sleep 1; done"
Unable to connect to the server: dial tcp 40.74.201.95:6443: i/o timeout
error: the server doesn't have a resource type "nodes"
capz-gfkahm-control-plane-q2qrc   NotReady   master   4s    v1.19.0-beta.0.293+ae1103726f9aea
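The two errors above are transient: the dial tcp i/o timeout fires while the API server endpoint is still provisioning, and "doesn't have a resource type" appears while API discovery is incomplete. The loop retries once per second until get nodes succeeds and grep matches a master node; NotReady is expected at this stage because no CNI is installed yet. An equivalent loop that suppresses the retry noise (a sketch, not the script's actual command):

# Retry quietly until the control-plane node registers with the API server.
timeout --foreground 600 bash -c \
  'until kubectl --kubeconfig=./kubeconfig get nodes 2>/dev/null | grep -q master; do sleep 1; done'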
# Deploy calico
kubectl --kubeconfig=./kubeconfig apply -f templates/addons/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
... skipping 17 lines ...
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
run "kubectl --kubeconfig=./kubeconfig ..." to work with the new target cluster
Waiting for 1 control plane machine and 2 worker machines to become Ready
cluster.cluster.x-k8s.io "capz-gfkahm" deleted
error: timed out waiting for the condition on clusters/capz-gfkahm
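This is the actual failure: the readiness wait never completed within its limit. The "timed out waiting for the condition on clusters/capz-gfkahm" message matches the error format of kubectl wait, so the harness was most likely blocking on the Cluster object's Ready condition while the MachinePool workers never came up; the delete line just above is the cleanup that follows. A hypothetical reconstruction of that wait (the resource and timeout are assumptions, not shown in the excerpt):

# Block until the Cluster reports Ready; exits nonzero on timeout.
kubectl wait --for=condition=Ready --timeout=20m clusters/capz-gfkahm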
kind delete cluster --name=capz || true
Deleting cluster "capz" ...
+ EXIT_VALUE=124
+ set +o xtrace
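EXIT_VALUE=124 is the conventional exit status of coreutils timeout(1) when it kills a command for exceeding its time limit, which confirms the job failed on a timeout rather than a crash. A quick demonstration:

# timeout kills sleep after 1 second and exits with status 124.
timeout 1 sleep 5; echo $?   # prints 124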
Cleaning up after docker in docker.
================================================================================
... skipping 6 lines ...