Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-10-19 09:24
Elapsed: 40m13s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 1074 lines ...
    ubuntu-1804:
    ubuntu-1804: TASK [sysprep : Truncate shell history] ****************************************
    ubuntu-1804: ok: [default] => (item={'path': '/root/.bash_history'})
    ubuntu-1804: ok: [default] => (item={'path': '/home/ubuntu/.bash_history'})
    ubuntu-1804:
    ubuntu-1804: PLAY RECAP *********************************************************************
    ubuntu-1804: default                    : ok=69   changed=52   unreachable=0    failed=0    skipped=86   rescued=0    ignored=0
    ubuntu-1804:
==> ubuntu-1804: Deleting instance...
    ubuntu-1804: Instance has been deleted!
==> ubuntu-1804: Creating image...
==> ubuntu-1804: Deleting disk...
    ubuntu-1804: Disk has been deleted!
... skipping 779 lines ...
# Wait for the kubeconfig to become available.
timeout 5m bash -c "while ! kubectl get secrets | grep test1-kubeconfig; do sleep 1; done"
test1-kubeconfig            Opaque                                1      0s
# Get kubeconfig and store it locally.
kubectl get secrets test1-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout 15m bash -c "while ! kubectl --kubeconfig=./kubeconfig get nodes | grep master; do sleep 1; done"
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
No resources found
No resources found
No resources found
No resources found
No resources found
No resources found
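The InternalError and "No resources found" responses above are expected while the workload cluster's control plane boots: the 15-minute loop simply retries 'kubectl get nodes' until a node matching 'master' registers. For illustration, a tidier sketch of the same wait using kubectl's built-in polling (an alternative, not what the job actually runs; it waits for node readiness rather than mere registration):

    # Alternative readiness wait; retries kubectl wait until the master
    # node exists and reports Ready. The label assumes a kubeadm-provisioned
    # control plane of this era.
    timeout 15m bash -c \
      'until kubectl --kubeconfig=./kubeconfig wait --for=condition=Ready node \
         --selector=node-role.kubernetes.io/master --timeout=30s; do sleep 5; done'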
... skipping 46 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
++ jq -r '.items[].status.phase'
+ [[ 3 == \3 ]]
+ [[ 0 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[09:47:52]'
... the same check then repeated every ~10 seconds from [09:48:02] through [09:52:29], each iteration reporting 0 of 3 machines Running and 0 Failed (only the xtrace ordering of the kubectl | jq | awk stages varies between iterations) ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[09:52:40]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
++ jq -r '.items[].status.phase'
+ [[ 3 == \3 ]]
+ [[ 2 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[09:52:50]'
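Each 14-line block above is one iteration of a machine-readiness poll. The '++' lines are bash xtrace for the stages of a kubectl | jq | awk pipeline, and pipeline stages trace in nondeterministic order, which is why the three commands appear shuffled between iterations. One machine reaches Running at [09:52:40] and a second at [09:52:50]. Reassembled, the loop is roughly the following (names, error handling, and the sleep are inferred from the trace, not copied from the job's script):

    # Sketch of the polling loop traced above.
    while true; do
      # Count Machines in phase Running; awk's NR doubles as the total.
      read -r running total < <(
        kubectl get machines --context=kind-clusterapi -o json \
          | jq -r '.items[].status.phase' \
          | awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}')

      # Done once all three machines exist and all of them are Running.
      if [[ "${total}" == 3 && "${running}" == 3 ]]; then
        break
      fi

      # Abort early if any Machine has entered the Failed phase.
      read -r failed _ < <(
        kubectl get machines --context=kind-clusterapi -o json \
          | jq -r '.items[].status.phase' \
          | awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}')
      if [[ ! "${failed}" -eq 0 ]]; then
        echo "a machine entered the Failed phase" >&2
        exit 1
      fi

      timestamp=$(date '+[%H:%M:%S]')
      echo "${timestamp} ${running}/${total} machines running, retrying"
      sleep 10   # matches the ~10-second spacing of the logged timestamps
    done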
... skipping 36 lines ...
Starting local Bazel server and connecting to it...
INFO: Invocation ID: cc30a56f-2a08-4105-8a2c-403d5c230f42
Loading: 
Loading: 0 packages loaded
Loading: 0 packages loaded
Loading: 0 packages loaded
WARNING: Download from https://storage.googleapis.com/k8s-bazel-cache/https://github.com/bazelbuild/rules_docker/releases/download/v0.14.4/rules_docker-v0.14.4.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
Loading: 0 packages loaded
Loading: 0 packages loaded
Loading: 0 packages loaded
Analyzing: 3 targets (4 packages loaded, 0 targets configured)
Analyzing: 3 targets (16 packages loaded, 20 targets configured)
Analyzing: 3 targets (16 packages loaded, 20 targets configured)
... skipping 2 lines ...
Analyzing: 3 targets (745 packages loaded, 10026 targets configured)
Analyzing: 3 targets (1778 packages loaded, 14951 targets configured)
Analyzing: 3 targets (1966 packages loaded, 16336 targets configured)
Analyzing: 3 targets (1966 packages loaded, 16336 targets configured)
Analyzing: 3 targets (1967 packages loaded, 16455 targets configured)
Analyzing: 3 targets (1967 packages loaded, 16455 targets configured)
DEBUG: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/bazel_gazelle/internal/go_repository.bzl:184:13: org_golang_x_tools: gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go:2:43: expected 'package', found 'EOF'
gazelle: found packages escapeinfo (escapeinfo.go) and lib (issue27856.go) in /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/internal/gccgoimporter/testdata
gazelle: found packages p (issue15920.go) and issue25301 (issue25301.go) in /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/internal/gcimporter/testdata
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go:1:34: expected 'package', found 'EOF'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go:1:16: expected ';', found '.'
gazelle: finding module path for import domain.name/importdecl: exit status 1: go: finding module for package domain.name/importdecl
cannot find module providing package domain.name/importdecl: module domain.name/importdecl: reading https://proxy.golang.org/domain.name/importdecl/@v/list: 410 Gone
	server response: not found: domain.name/importdecl@latest: unrecognized import path "domain.name/importdecl": https fetch: Get "https://domain.name/importdecl?go-get=1": dial tcp: lookup domain.name on 8.8.8.8:53: no such host
gazelle: finding module path for import old.com/one: exit status 1: go: finding module for package old.com/one
cannot find module providing package old.com/one: module old.com/one: reading https://proxy.golang.org/old.com/one/@v/list: 410 Gone
	server response: not found: old.com/one@latest: unrecognized import path "old.com/one": https fetch: Get "http://www.old.com/one?go-get=1": redirected from secure URL https://old.com/one?go-get=1 to insecure URL http://www.old.com/one?go-get=1
... skipping 120 lines ...
+ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:50:19Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-12-04T07:23:47Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
+ echo 'deployed cluster:'
deployed cluster:
+ kubectl --kubeconfig=/home/prow/go/src/k8s.io/kubernetes/kubeconfig version
error: stat /home/prow/go/src/k8s.io/kubernetes/kubeconfig: no such file or directory
+ true
+ echo ''

+ kubectl --context=kind-clusterapi get clusters,gcpclusters,machines,gcpmachines,kubeadmconfigs,machinedeployments,gcpmachinetemplates,kubeadmconfigtemplates,machinesets,kubeadmcontrolplanes --all-namespaces -o yaml
+ echo 'images in docker'
+ docker images
... skipping 4 lines ...
+ jq --raw-output '.items[].spec.containers[].image'
+ sort
+ echo 'images in deployed cluster using kubectl CLI'
+ kubectl --kubeconfig=/home/prow/go/src/k8s.io/kubernetes/kubeconfig get pods --all-namespaces -o json
+ jq --raw-output '.items[].spec.containers[].image'
+ sort
error: stat /home/prow/go/src/k8s.io/kubernetes/kubeconfig: no such file or directory
+ true
+ kubectl cluster-info dump
+ echo '=== gcloud compute instances list ==='
+ gcloud compute instances list --project k8s-jnks-gci-autoscaling
+ echo '=== cluster-info dump ==='
+ kubectl --kubeconfig=/home/prow/go/src/k8s.io/kubernetes/kubeconfig cluster-info dump
error: stat /home/prow/go/src/k8s.io/kubernetes/kubeconfig: no such file or directory
+ true
+ kind export logs --name=clusterapi /logs/artifacts/logs
Exported logs for cluster "clusterapi" to:
/logs/artifacts/logs
++ gcloud compute instances list '--filter=zone~'\''us-east4-.*'\''' --project k8s-jnks-gci-autoscaling '--format=value(name)'
+ for node_name in $(gcloud compute instances list --filter="zone~'${GCP_REGION}-.*'" --project "${GCP_PROJECT}" --format='value(name)')
... skipping 44 lines ...
+ gcloud compute scp --recurse --project k8s-jnks-gci-autoscaling --zone us-east4-a test1-md-0-66zhh:/var/log/cloud-init.log test1-md-0-66zhh:/var/log/cloud-init-output.log test1-md-0-66zhh:/var/log/pods test1-md-0-66zhh:/var/log/containers /logs/artifacts/logs/test1-md-0-66zhh
+ ssh-to-node test1-md-0-66zhh us-east4-a 'sudo journalctl --output=short-precise -k'
+ local node=test1-md-0-66zhh
+ local zone=us-east4-a
+ local 'cmd=sudo journalctl --output=short-precise -k'
+ gcloud compute --project k8s-jnks-gci-autoscaling instances add-access-config --zone us-east4-a test1-md-0-66zhh
ERROR: (gcloud.compute.instances.add-access-config) Could not fetch resource:
 - At most one access config currently supported.

+ true
+ for try in {1..5}
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-jnks-gci-autoscaling --zone us-east4-a test1-md-0-66zhh --command 'echo test > /dev/null'
+ break
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-jnks-gci-autoscaling --zone us-east4-a test1-md-0-66zhh --command 'sudo journalctl --output=short-precise -k'
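Each repeated block in this log-collection phase is one call to an ssh-to-node helper whose body the xtrace exposes: it first tries to attach an external IP with add-access-config (which fails harmlessly when the instance already has one, hence the recurring "At most one access config" error followed by '+ true'), probes SSH connectivity with up to five retries, and then runs the requested command. Reconstructed from the trace (the '|| true' and the break condition are inferred):

    # ssh-to-node helper as reconstructed from the xtrace; the project ID
    # is hard-coded to match this log.
    ssh-to-node() {
      local node=$1
      local zone=$2
      local cmd=$3

      # Ensure the instance has an external IP for SSH; fails harmlessly
      # if an access config is already attached.
      gcloud compute --project k8s-jnks-gci-autoscaling instances \
        add-access-config --zone "${zone}" "${node}" || true

      # Probe connectivity, retrying up to five times.
      for try in {1..5}; do
        gcloud compute ssh --ssh-flag='-o LogLevel=quiet' \
          --ssh-flag='-o ConnectTimeout=30' \
          --project k8s-jnks-gci-autoscaling --zone "${zone}" "${node}" \
          --command 'echo test > /dev/null' && break
      done

      # Run the requested command on the node.
      gcloud compute ssh --ssh-flag='-o LogLevel=quiet' \
        --ssh-flag='-o ConnectTimeout=30' \
        --project k8s-jnks-gci-autoscaling --zone "${zone}" "${node}" \
        --command "${cmd}"
    }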
+ ssh-to-node test1-md-0-66zhh us-east4-a 'sudo journalctl --output=short-precise'
+ local node=test1-md-0-66zhh
+ local zone=us-east4-a
+ local 'cmd=sudo journalctl --output=short-precise'
+ gcloud compute --project k8s-jnks-gci-autoscaling instances add-access-config --zone us-east4-a test1-md-0-66zhh
ERROR: (gcloud.compute.instances.add-access-config) Could not fetch resource:
 - At most one access config currently supported.

+ true
+ for try in {1..5}
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-jnks-gci-autoscaling --zone us-east4-a test1-md-0-66zhh --command 'echo test > /dev/null'
+ break
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-jnks-gci-autoscaling --zone us-east4-a test1-md-0-66zhh --command 'sudo journalctl --output=short-precise'
+ ssh-to-node test1-md-0-66zhh us-east4-a 'sudo crictl version && sudo crictl info'
+ local node=test1-md-0-66zhh
+ local zone=us-east4-a
+ local 'cmd=sudo crictl version && sudo crictl info'
+ gcloud compute --project k8s-jnks-gci-autoscaling instances add-access-config --zone us-east4-a test1-md-0-66zhh
ERROR: (gcloud.compute.instances.add-access-config) Could not fetch resource:
 - At most one access config currently supported.

+ true
+ for try in {1..5}
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-jnks-gci-autoscaling --zone us-east4-a test1-md-0-66zhh --command 'echo test > /dev/null'
+ break
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-jnks-gci-autoscaling --zone us-east4-a test1-md-0-66zhh --command 'sudo crictl version && sudo crictl info'
+ ssh-to-node test1-md-0-66zhh us-east4-a 'sudo journalctl --no-pager -u kubelet.service'
+ local node=test1-md-0-66zhh
+ local zone=us-east4-a
+ local 'cmd=sudo journalctl --no-pager -u kubelet.service'
+ gcloud compute --project k8s-jnks-gci-autoscaling instances add-access-config --zone us-east4-a test1-md-0-66zhh
ERROR: (gcloud.compute.instances.add-access-config) Could not fetch resource:
 - At most one access config currently supported.

+ true
+ for try in {1..5}
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-jnks-gci-autoscaling --zone us-east4-a test1-md-0-66zhh --command 'echo test > /dev/null'
+ break
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-jnks-gci-autoscaling --zone us-east4-a test1-md-0-66zhh --command 'sudo journalctl --no-pager -u kubelet.service'
+ ssh-to-node test1-md-0-66zhh us-east4-a 'sudo journalctl --no-pager -u containerd.service'
+ local node=test1-md-0-66zhh
+ local zone=us-east4-a
+ local 'cmd=sudo journalctl --no-pager -u containerd.service'
+ gcloud compute --project k8s-jnks-gci-autoscaling instances add-access-config --zone us-east4-a test1-md-0-66zhh
ERROR: (gcloud.compute.instances.add-access-config) Could not fetch resource:
 - At most one access config currently supported.

+ true
+ for try in {1..5}
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-jnks-gci-autoscaling --zone us-east4-a test1-md-0-66zhh --command 'echo test > /dev/null'
+ break
... skipping 21 lines ...
... the identical log collection then ran for node test1-md-0-8d8x9 (zone us-east4-a): scp of cloud-init, pod, and container logs, followed by ssh-to-node runs of 'journalctl --output=short-precise -k', 'journalctl --output=short-precise', 'crictl version && crictl info', and the kubelet and containerd unit logs; each add-access-config call failed with the same "At most one access config" error and was ignored ...
... skipping 21 lines ...
... and again for the control-plane node test1-control-plane-hrw4g (zone us-east4-b): the same scp and the same five ssh-to-node commands, with the same ignored add-access-config errors ...
+ gcloud logging read --order=asc '--format=table(timestamp,jsonPayload.resource.name,jsonPayload.event_subtype)' --project k8s-jnks-gci-autoscaling 'timestamp >= "2020-10-19T09:24:14Z"'
+ cleanup
+ [[ true = true ]]
+ timeout 600 kubectl delete cluster test1
cluster.cluster.x-k8s.io "test1" deleted
+ timeout 600 kubectl wait --for=delete cluster/test1
Error from server (NotFound): clusters.cluster.x-k8s.io "test1" not found
+ true
+ make kind-reset
make: *** No rule to make target 'kind-reset'.  Stop.
+ true
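The missing 'kind-reset' make target is tolerated with the usual '+ true'. Assuming the target's purpose is to tear down the kind management cluster the job created (the 'kind-clusterapi' context used throughout this log), the manual equivalent would be:

    # Manual equivalent of the absent kind-reset target; the cluster name
    # is inferred from the kind-clusterapi context, not from a Makefile.
    kind delete cluster --name clusterapi || true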
++ go env GOPATH
+ cd /home/prow/go/src/k8s.io/kubernetes
+ rm -f _output/bin/e2e.test
+ gcloud compute forwarding-rules delete --project k8s-jnks-gci-autoscaling --global test1-apiserver --quiet
ERROR: (gcloud.compute.forwarding-rules.delete) Could not fetch resource:
 - The resource 'projects/k8s-jnks-gci-autoscaling/global/forwardingRules/test1-apiserver' was not found

+ true
+ gcloud compute target-tcp-proxies delete --project k8s-jnks-gci-autoscaling test1-apiserver --quiet
ERROR: (gcloud.compute.target-tcp-proxies.delete) Some requests did not succeed:
 - The resource 'projects/k8s-jnks-gci-autoscaling/global/targetTcpProxies/test1-apiserver' was not found

+ true
+ gcloud compute backend-services delete --project k8s-jnks-gci-autoscaling --global test1-apiserver --quiet
ERROR: (gcloud.compute.backend-services.delete) Some requests did not succeed:
 - The resource 'projects/k8s-jnks-gci-autoscaling/global/backendServices/test1-apiserver' was not found

+ true
+ gcloud compute health-checks delete --project k8s-jnks-gci-autoscaling test1-apiserver --quiet
ERROR: (gcloud.compute.health-checks.delete) Could not fetch resource:
 - The resource 'projects/k8s-jnks-gci-autoscaling/global/healthChecks/test1-apiserver' was not found

+ true
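The four deletes above dismantle the GCP load balancer fronting the workload cluster's API server in dependency order: forwarding rule, then target TCP proxy, then backend service, then health check. Each tolerates "not found" because 'kubectl delete cluster' normally removes these resources first. Condensed into one loop for illustration (a sketch; the trace shows the job issues them as separate commands):

    # Tear down the test1 API-server load balancer front-to-back, ignoring
    # resources that cluster deletion already removed. ${args} is left
    # unquoted on purpose so each entry word-splits into arguments.
    for args in \
        'forwarding-rules delete --global test1-apiserver' \
        'target-tcp-proxies delete test1-apiserver' \
        'backend-services delete --global test1-apiserver' \
        'health-checks delete test1-apiserver'; do
      gcloud compute ${args} --project k8s-jnks-gci-autoscaling --quiet || true
    done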
+ gcloud compute instances list --project k8s-jnks-gci-autoscaling
+ grep test1
+ awk '{print "gcloud compute instances delete --project k8s-jnks-gci-autoscaling --quiet " $1 " --zone " $2 "\n"}'
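The last cleanup step lists any instances still matching 'test1' and has awk emit one ready-to-run delete command per instance (name in column 1, zone in column 2). The excerpt ends before the generated commands execute; piping them to a shell, as below, is one plausible completion (an assumption, since the execution falls in the skipped lines):

    # List leftover test1 instances and delete each in its own zone. The
    # trailing "| bash" is an assumed execution step not shown in the log.
    gcloud compute instances list --project k8s-jnks-gci-autoscaling \
      | grep test1 \
      | awk '{print "gcloud compute instances delete --project k8s-jnks-gci-autoscaling --quiet " $1 " --zone " $2}' \
      | bash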
... skipping 55 lines ...