Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-10-20 13:30
Elapsed: 38m31s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 1076 lines ...
    ubuntu-1804:
    ubuntu-1804: TASK [sysprep : Truncate shell history] ****************************************
    ubuntu-1804: ok: [default] => (item={'path': '/root/.bash_history'})
    ubuntu-1804: ok: [default] => (item={'path': '/home/ubuntu/.bash_history'})
    ubuntu-1804:
    ubuntu-1804: PLAY RECAP *********************************************************************
    ubuntu-1804: default                    : ok=69   changed=52   unreachable=0    failed=0    skipped=86   rescued=0    ignored=0
    ubuntu-1804:
==> ubuntu-1804: Deleting instance...
    ubuntu-1804: Instance has been deleted!
==> ubuntu-1804: Creating image...
==> ubuntu-1804: Deleting disk...
    ubuntu-1804: Disk has been deleted!
... skipping 777 lines ...
# Wait for the kubeconfig to become available.
timeout 5m bash -c "while ! kubectl get secrets | grep test1-kubeconfig; do sleep 1; done"
test1-kubeconfig            Opaque                                1      0s
# Get kubeconfig and store it locally.
kubectl get secrets test1-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout 15m bash -c "while ! kubectl --kubeconfig=./kubeconfig get nodes | grep master; do sleep 1; done"
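The two `timeout … bash -c "while ! …"` commands above implement a simple poll-until-ready pattern: retry a check every second under an overall deadline, with `timeout` failing the step if the deadline passes. A minimal standalone sketch of the same pattern (a temp file stands in for the `kubectl get secrets | grep test1-kubeconfig` check, which needs a live cluster):

```shell
# Poll-until-ready under a deadline, as in the kubeconfig wait above.
# The readiness check here is a file appearing; in the job it is
# `kubectl get secrets | grep test1-kubeconfig`.
marker=$(mktemp -u)
( sleep 1; : > "$marker" ) &   # readiness arrives after ~1s
if timeout 10 sh -c "while ! [ -e '$marker' ]; do sleep 1; done"; then
  ready=yes
else
  ready=no                     # deadline hit, as `timeout 15m` would report
fi
rm -f "$marker"
echo "$ready"
```

`timeout` exits non-zero when the deadline is hit, which is what lets the job fail fast instead of hanging on a cluster that never comes up.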
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
error: the server doesn't have a resource type "nodes"
No resources found
No resources found
No resources found
No resources found
No resources found
No resources found
... skipping 53 lines ...
+ read running total
++ jq -r '.items[].status.phase'
++ kubectl get machines --context=kind-clusterapi -o json
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 0 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[13:52:36]'
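Each iteration above counts machine phases by piping `kubectl get machines -o json` through `jq -r '.items[].status.phase'` and an `awk` tally, then `read`ing the resulting "count total" pair. The tally stage is pure text processing and can be exercised on its own; the sample phases below are illustrative, not from this job:

```shell
# Tally phases matching running/Running and emit "count total",
# exactly as the awk stage in the traced loop does.
phases='Provisioning
Running
running
Pending'
set -- $(printf '%s\n' "$phases" \
  | awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}')
running=$1 total=$2
echo "$running/$total machines running"   # -> 2/4 machines running
```

The job then compares the count against the expected 3 (the `[[ 0 == \3 ]]` / `[[ 1 == \3 ]]` checks in the trace) and keeps polling; here it never reaches 3 before the loop gives up.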
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
++ jq -r '.items[].status.phase'
+ [[ 3 == \3 ]]
+ [[ 0 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[13:52:46]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 0 == \3 ]]
+ read failed total
++ jq -r '.items[].status.phase'
++ kubectl get machines --context=kind-clusterapi -o json
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[13:52:57]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 0 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[13:53:07]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 0 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[13:53:17]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 0 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[13:53:28]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 0 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[13:53:38]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[13:53:48]'
... skipping 11 lines ...
+ read running total
++ jq -r '.items[].status.phase'
++ kubectl get machines --context=kind-clusterapi -o json
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[13:53:59]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[13:54:09]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[13:54:19]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[13:54:30]'
... skipping 11 lines ...
+ read running total
++ jq -r '.items[].status.phase'
++ kubectl get machines --context=kind-clusterapi -o json
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ jq -r '.items[].status.phase'
++ kubectl get machines --context=kind-clusterapi -o json
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[13:54:40]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[13:54:50]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
++ jq -r '.items[].status.phase'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[13:55:01]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ jq -r '.items[].status.phase'
++ kubectl get machines --context=kind-clusterapi -o json
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[13:55:11]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[13:55:21]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[13:55:32]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[13:55:42]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[13:55:52]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[13:56:03]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[13:56:13]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[13:56:24]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[13:56:34]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[13:56:44]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 2 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[13:56:55]'
... skipping 37 lines ...
INFO: Invocation ID: 11348401-fd19-41e7-80fd-73fa89019326
Loading: 
Loading: 0 packages loaded
Loading: 0 packages loaded
Loading: 0 packages loaded
Loading: 0 packages loaded
WARNING: Download from https://storage.googleapis.com/k8s-bazel-cache/https://github.com/bazelbuild/rules_docker/releases/download/v0.14.4/rules_docker-v0.14.4.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
Loading: 0 packages loaded
Loading: 0 packages loaded
Loading: 0 packages loaded
Loading: 0 packages loaded
Analyzing: 3 targets (4 packages loaded, 0 targets configured)
Analyzing: 3 targets (16 packages loaded, 20 targets configured)
... skipping 3 lines ...
Analyzing: 3 targets (1457 packages loaded, 13911 targets configured)
Analyzing: 3 targets (1966 packages loaded, 16336 targets configured)
Analyzing: 3 targets (1966 packages loaded, 16336 targets configured)
Analyzing: 3 targets (1966 packages loaded, 16336 targets configured)
Analyzing: 3 targets (1967 packages loaded, 16455 targets configured)
Analyzing: 3 targets (1967 packages loaded, 16455 targets configured)
DEBUG: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/bazel_gazelle/internal/go_repository.bzl:184:13: org_golang_x_tools: gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go:2:43: expected 'package', found 'EOF'
gazelle: found packages aliases (aliases.go) and complexnums (complexnums.go) in /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/internal/gccgoimporter/testdata
gazelle: found packages issue25301 (issue25301.go) and a (a.go) in /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/internal/gcimporter/testdata
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go:1:34: expected 'package', found 'EOF'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go:1:16: expected ';', found '.'
gazelle: finding module path for import domain.name/importdecl: exit status 1: go: finding module for package domain.name/importdecl
cannot find module providing package domain.name/importdecl: module domain.name/importdecl: reading https://proxy.golang.org/domain.name/importdecl/@v/list: 410 Gone
	server response: not found: domain.name/importdecl@latest: unrecognized import path "domain.name/importdecl": https fetch: Get "https://domain.name/importdecl?go-get=1": dial tcp: lookup domain.name on 8.8.8.8:53: no such host
gazelle: finding module path for import old.com/one: exit status 1: go: finding module for package old.com/one
cannot find module providing package old.com/one: module old.com/one: reading https://proxy.golang.org/old.com/one/@v/list: 410 Gone
	server response: not found: old.com/one@latest: unrecognized import path "old.com/one": https fetch: Get "http://www.old.com/one?go-get=1": redirected from secure URL https://old.com/one?go-get=1 to insecure URL http://www.old.com/one?go-get=1
... skipping 120 lines ...
+ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:50:19Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-12-04T07:23:47Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
+ echo 'deployed cluster:'
deployed cluster:
+ kubectl --kubeconfig=/home/prow/go/src/k8s.io/kubernetes/kubeconfig version
error: stat /home/prow/go/src/k8s.io/kubernetes/kubeconfig: no such file or directory
+ true
+ echo ''

+ kubectl --context=kind-clusterapi get clusters,gcpclusters,machines,gcpmachines,kubeadmconfigs,machinedeployments,gcpmachinetemplates,kubeadmconfigtemplates,machinesets,kubeadmcontrolplanes --all-namespaces -o yaml
+ echo 'images in docker'
+ docker images
... skipping 4 lines ...
+ jq --raw-output '.items[].spec.containers[].image'
+ sort
+ echo 'images in deployed cluster using kubectl CLI'
+ kubectl --kubeconfig=/home/prow/go/src/k8s.io/kubernetes/kubeconfig get pods --all-namespaces -o json
+ jq --raw-output '.items[].spec.containers[].image'
+ sort
error: stat /home/prow/go/src/k8s.io/kubernetes/kubeconfig: no such file or directory
+ true
+ kubectl cluster-info dump
+ echo '=== gcloud compute instances list ==='
+ gcloud compute instances list --project k8s-gce-dg-1-6-1-5-dwngr-clu
+ echo '=== cluster-info dump ==='
+ kubectl --kubeconfig=/home/prow/go/src/k8s.io/kubernetes/kubeconfig cluster-info dump
error: stat /home/prow/go/src/k8s.io/kubernetes/kubeconfig: no such file or directory
+ true
+ kind export logs --name=clusterapi /logs/artifacts/logs
Exported logs for cluster "clusterapi" to:
/logs/artifacts/logs
++ gcloud compute instances list '--filter=zone~'\''us-east4-.*'\''' --project k8s-gce-dg-1-6-1-5-dwngr-clu '--format=value(name)'
+ for node_name in $(gcloud compute instances list --filter="zone~'${GCP_REGION}-.*'" --project "${GCP_PROJECT}" --format='value(name)')
... skipping 44 lines ...
+ gcloud compute scp --recurse --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-a test1-md-0-hgd6d:/var/log/cloud-init.log test1-md-0-hgd6d:/var/log/cloud-init-output.log test1-md-0-hgd6d:/var/log/pods test1-md-0-hgd6d:/var/log/containers /logs/artifacts/logs/test1-md-0-hgd6d
+ ssh-to-node test1-md-0-hgd6d us-east4-a 'sudo journalctl --output=short-precise -k'
+ local node=test1-md-0-hgd6d
+ local zone=us-east4-a
+ local 'cmd=sudo journalctl --output=short-precise -k'
+ gcloud compute --project k8s-gce-dg-1-6-1-5-dwngr-clu instances add-access-config --zone us-east4-a test1-md-0-hgd6d
ERROR: (gcloud.compute.instances.add-access-config) Could not fetch resource:
 - At most one access config currently supported.

+ true
+ for try in {1..5}
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-a test1-md-0-hgd6d --command 'echo test > /dev/null'
+ break
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-a test1-md-0-hgd6d --command 'sudo journalctl --output=short-precise -k'
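The repeated traces above come from an `ssh-to-node` helper: it first tries to attach an external access config (tolerating the "At most one access config" error, which just means one already exists), probes SSH connectivity up to five times, then runs the requested command. A sketch reconstructed from the xtrace output (the `gcloud` calls are shown as traced and need real credentials, so this is not a drop-in):

```shell
# Reconstructed from the xtrace above; GCP_PROJECT is the job's project
# variable. The `|| true` mirrors the tolerated add-access-config error.
ssh-to-node() {
  local node=$1 zone=$2 cmd=$3
  # Ensure the instance has an external IP; failure means it already does.
  gcloud compute --project "${GCP_PROJECT}" instances add-access-config \
    --zone "${zone}" "${node}" || true
  # Probe connectivity with a no-op command, retrying up to five times.
  for try in {1..5}; do
    if gcloud compute ssh --ssh-flag='-o LogLevel=quiet' \
        --ssh-flag='-o ConnectTimeout=30' --project "${GCP_PROJECT}" \
        --zone "${zone}" "${node}" --command 'echo test > /dev/null'; then
      break
    fi
  done
  # Run the actual command once connectivity is established.
  gcloud compute ssh --ssh-flag='-o LogLevel=quiet' \
    --ssh-flag='-o ConnectTimeout=30' --project "${GCP_PROJECT}" \
    --zone "${zone}" "${node}" --command "${cmd}"
}
```

Defining the helper is side-effect free; only invoking it touches GCP.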
+ ssh-to-node test1-md-0-hgd6d us-east4-a 'sudo journalctl --output=short-precise'
+ local node=test1-md-0-hgd6d
+ local zone=us-east4-a
+ local 'cmd=sudo journalctl --output=short-precise'
+ gcloud compute --project k8s-gce-dg-1-6-1-5-dwngr-clu instances add-access-config --zone us-east4-a test1-md-0-hgd6d
ERROR: (gcloud.compute.instances.add-access-config) Could not fetch resource:
 - At most one access config currently supported.

+ true
+ for try in {1..5}
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-a test1-md-0-hgd6d --command 'echo test > /dev/null'
+ break
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-a test1-md-0-hgd6d --command 'sudo journalctl --output=short-precise'
+ ssh-to-node test1-md-0-hgd6d us-east4-a 'sudo crictl version && sudo crictl info'
+ local node=test1-md-0-hgd6d
+ local zone=us-east4-a
+ local 'cmd=sudo crictl version && sudo crictl info'
+ gcloud compute --project k8s-gce-dg-1-6-1-5-dwngr-clu instances add-access-config --zone us-east4-a test1-md-0-hgd6d
ERROR: (gcloud.compute.instances.add-access-config) Could not fetch resource:
 - At most one access config currently supported.

+ true
+ for try in {1..5}
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-a test1-md-0-hgd6d --command 'echo test > /dev/null'
+ break
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-a test1-md-0-hgd6d --command 'sudo crictl version && sudo crictl info'
+ ssh-to-node test1-md-0-hgd6d us-east4-a 'sudo journalctl --no-pager -u kubelet.service'
+ local node=test1-md-0-hgd6d
+ local zone=us-east4-a
+ local 'cmd=sudo journalctl --no-pager -u kubelet.service'
+ gcloud compute --project k8s-gce-dg-1-6-1-5-dwngr-clu instances add-access-config --zone us-east4-a test1-md-0-hgd6d
ERROR: (gcloud.compute.instances.add-access-config) Could not fetch resource:
 - At most one access config currently supported.

+ true
+ for try in {1..5}
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-a test1-md-0-hgd6d --command 'echo test > /dev/null'
+ break
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-a test1-md-0-hgd6d --command 'sudo journalctl --no-pager -u kubelet.service'
+ ssh-to-node test1-md-0-hgd6d us-east4-a 'sudo journalctl --no-pager -u containerd.service'
+ local node=test1-md-0-hgd6d
+ local zone=us-east4-a
+ local 'cmd=sudo journalctl --no-pager -u containerd.service'
+ gcloud compute --project k8s-gce-dg-1-6-1-5-dwngr-clu instances add-access-config --zone us-east4-a test1-md-0-hgd6d
ERROR: (gcloud.compute.instances.add-access-config) Could not fetch resource:
 - At most one access config currently supported.

+ true
+ for try in {1..5}
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-a test1-md-0-hgd6d --command 'echo test > /dev/null'
+ break
... skipping 21 lines ...
+ gcloud compute scp --recurse --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-a test1-md-0-nt7qz:/var/log/cloud-init.log test1-md-0-nt7qz:/var/log/cloud-init-output.log test1-md-0-nt7qz:/var/log/pods test1-md-0-nt7qz:/var/log/containers /logs/artifacts/logs/test1-md-0-nt7qz
+ ssh-to-node test1-md-0-nt7qz us-east4-a 'sudo journalctl --output=short-precise -k'
+ local node=test1-md-0-nt7qz
+ local zone=us-east4-a
+ local 'cmd=sudo journalctl --output=short-precise -k'
+ gcloud compute --project k8s-gce-dg-1-6-1-5-dwngr-clu instances add-access-config --zone us-east4-a test1-md-0-nt7qz
ERROR: (gcloud.compute.instances.add-access-config) Could not fetch resource:
 - At most one access config currently supported.

+ true
+ for try in {1..5}
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-a test1-md-0-nt7qz --command 'echo test > /dev/null'
+ break
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-a test1-md-0-nt7qz --command 'sudo journalctl --output=short-precise -k'
+ ssh-to-node test1-md-0-nt7qz us-east4-a 'sudo journalctl --output=short-precise'
+ local node=test1-md-0-nt7qz
+ local zone=us-east4-a
+ local 'cmd=sudo journalctl --output=short-precise'
+ gcloud compute --project k8s-gce-dg-1-6-1-5-dwngr-clu instances add-access-config --zone us-east4-a test1-md-0-nt7qz
ERROR: (gcloud.compute.instances.add-access-config) Could not fetch resource:
 - At most one access config currently supported.

+ true
+ for try in {1..5}
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-a test1-md-0-nt7qz --command 'echo test > /dev/null'
+ break
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-a test1-md-0-nt7qz --command 'sudo journalctl --output=short-precise'
+ ssh-to-node test1-md-0-nt7qz us-east4-a 'sudo crictl version && sudo crictl info'
+ local node=test1-md-0-nt7qz
+ local zone=us-east4-a
+ local 'cmd=sudo crictl version && sudo crictl info'
+ gcloud compute --project k8s-gce-dg-1-6-1-5-dwngr-clu instances add-access-config --zone us-east4-a test1-md-0-nt7qz
ERROR: (gcloud.compute.instances.add-access-config) Could not fetch resource:
 - At most one access config currently supported.

+ true
+ for try in {1..5}
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-a test1-md-0-nt7qz --command 'echo test > /dev/null'
+ break
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-a test1-md-0-nt7qz --command 'sudo crictl version && sudo crictl info'
+ ssh-to-node test1-md-0-nt7qz us-east4-a 'sudo journalctl --no-pager -u kubelet.service'
+ local node=test1-md-0-nt7qz
+ local zone=us-east4-a
+ local 'cmd=sudo journalctl --no-pager -u kubelet.service'
+ gcloud compute --project k8s-gce-dg-1-6-1-5-dwngr-clu instances add-access-config --zone us-east4-a test1-md-0-nt7qz
ERROR: (gcloud.compute.instances.add-access-config) Could not fetch resource:
 - At most one access config currently supported.

+ true
+ for try in {1..5}
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-a test1-md-0-nt7qz --command 'echo test > /dev/null'
+ break
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-a test1-md-0-nt7qz --command 'sudo journalctl --no-pager -u kubelet.service'
+ ssh-to-node test1-md-0-nt7qz us-east4-a 'sudo journalctl --no-pager -u containerd.service'
+ local node=test1-md-0-nt7qz
+ local zone=us-east4-a
+ local 'cmd=sudo journalctl --no-pager -u containerd.service'
+ gcloud compute --project k8s-gce-dg-1-6-1-5-dwngr-clu instances add-access-config --zone us-east4-a test1-md-0-nt7qz
ERROR: (gcloud.compute.instances.add-access-config) Could not fetch resource:
 - At most one access config currently supported.

+ true
+ for try in {1..5}
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-a test1-md-0-nt7qz --command 'echo test > /dev/null'
+ break
... skipping 21 lines ...
+ gcloud compute scp --recurse --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-c test1-control-plane-r74kz:/var/log/cloud-init.log test1-control-plane-r74kz:/var/log/cloud-init-output.log test1-control-plane-r74kz:/var/log/pods test1-control-plane-r74kz:/var/log/containers /logs/artifacts/logs/test1-control-plane-r74kz
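The scp invocation above pulls cloud-init, pod, and container logs from each node into the Prow artifacts directory in one recursive copy. A dry-run sketch that builds the same argument list but prints it instead of executing (gcloud and the instances are not available here; `build_scp_cmd` is an illustrative name, and the real command also passes `--project`):

```shell
#!/usr/bin/env bash
# Build the `gcloud compute scp --recurse` argument list for one node.
build_scp_cmd() {
  local node="$1" zone="$2" dest="$3"
  local sources=() path
  # Each source is prefixed "node:" so scp pulls from the remote instance.
  for path in /var/log/cloud-init.log /var/log/cloud-init-output.log \
              /var/log/pods /var/log/containers; do
    sources+=("${node}:${path}")
  done
  echo gcloud compute scp --recurse --zone "${zone}" "${sources[@]}" "${dest}"
}

build_scp_cmd test1-control-plane-r74kz us-east4-c \
  /logs/artifacts/logs/test1-control-plane-r74kz
```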
+ ssh-to-node test1-control-plane-r74kz us-east4-c 'sudo journalctl --output=short-precise -k'
+ local node=test1-control-plane-r74kz
+ local zone=us-east4-c
+ local 'cmd=sudo journalctl --output=short-precise -k'
+ gcloud compute --project k8s-gce-dg-1-6-1-5-dwngr-clu instances add-access-config --zone us-east4-c test1-control-plane-r74kz
ERROR: (gcloud.compute.instances.add-access-config) Could not fetch resource:
 - At most one access config currently supported.

+ true
+ for try in {1..5}
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-c test1-control-plane-r74kz --command 'echo test > /dev/null'
+ break
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-c test1-control-plane-r74kz --command 'sudo journalctl --output=short-precise -k'
+ ssh-to-node test1-control-plane-r74kz us-east4-c 'sudo journalctl --output=short-precise'
+ local node=test1-control-plane-r74kz
+ local zone=us-east4-c
+ local 'cmd=sudo journalctl --output=short-precise'
+ gcloud compute --project k8s-gce-dg-1-6-1-5-dwngr-clu instances add-access-config --zone us-east4-c test1-control-plane-r74kz
ERROR: (gcloud.compute.instances.add-access-config) Could not fetch resource:
 - At most one access config currently supported.

+ true
+ for try in {1..5}
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-c test1-control-plane-r74kz --command 'echo test > /dev/null'
+ break
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-c test1-control-plane-r74kz --command 'sudo journalctl --output=short-precise'
+ ssh-to-node test1-control-plane-r74kz us-east4-c 'sudo crictl version && sudo crictl info'
+ local node=test1-control-plane-r74kz
+ local zone=us-east4-c
+ local 'cmd=sudo crictl version && sudo crictl info'
+ gcloud compute --project k8s-gce-dg-1-6-1-5-dwngr-clu instances add-access-config --zone us-east4-c test1-control-plane-r74kz
ERROR: (gcloud.compute.instances.add-access-config) Could not fetch resource:
 - At most one access config currently supported.

+ true
+ for try in {1..5}
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-c test1-control-plane-r74kz --command 'echo test > /dev/null'
+ break
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-c test1-control-plane-r74kz --command 'sudo crictl version && sudo crictl info'
+ ssh-to-node test1-control-plane-r74kz us-east4-c 'sudo journalctl --no-pager -u kubelet.service'
+ local node=test1-control-plane-r74kz
+ local zone=us-east4-c
+ local 'cmd=sudo journalctl --no-pager -u kubelet.service'
+ gcloud compute --project k8s-gce-dg-1-6-1-5-dwngr-clu instances add-access-config --zone us-east4-c test1-control-plane-r74kz
ERROR: (gcloud.compute.instances.add-access-config) Could not fetch resource:
 - At most one access config currently supported.

+ true
+ for try in {1..5}
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-c test1-control-plane-r74kz --command 'echo test > /dev/null'
+ break
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-c test1-control-plane-r74kz --command 'sudo journalctl --no-pager -u kubelet.service'
+ ssh-to-node test1-control-plane-r74kz us-east4-c 'sudo journalctl --no-pager -u containerd.service'
+ local node=test1-control-plane-r74kz
+ local zone=us-east4-c
+ local 'cmd=sudo journalctl --no-pager -u containerd.service'
+ gcloud compute --project k8s-gce-dg-1-6-1-5-dwngr-clu instances add-access-config --zone us-east4-c test1-control-plane-r74kz
ERROR: (gcloud.compute.instances.add-access-config) Could not fetch resource:
 - At most one access config currently supported.

+ true
+ for try in {1..5}
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-c test1-control-plane-r74kz --command 'echo test > /dev/null'
+ break
+ gcloud compute ssh '--ssh-flag=-o LogLevel=quiet' '--ssh-flag=-o ConnectTimeout=30' --project k8s-gce-dg-1-6-1-5-dwngr-clu --zone us-east4-c test1-control-plane-r74kz --command 'sudo journalctl --no-pager -u containerd.service'
+ gcloud logging read --order=asc '--format=table(timestamp,jsonPayload.resource.name,jsonPayload.event_subtype)' --project k8s-gce-dg-1-6-1-5-dwngr-clu 'timestamp >= "2020-10-20T13:30:33Z"'
+ cleanup
+ [[ true = true ]]
+ timeout 600 kubectl delete cluster test1
cluster.cluster.x-k8s.io "test1" deleted
+ timeout 600 kubectl wait --for=delete cluster/test1
Error from server (NotFound): clusters.cluster.x-k8s.io "test1" not found
+ true
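The teardown bounds each step with `timeout` and swallows failures — the interspersed `+ true` lines in the trace are an `|| true` firing after the NotFound, so a cluster that is already gone does not fail cleanup. A sketch of that tolerance pattern, with `best_effort` as an illustrative wrapper name (the real script inlines `|| true` after each command):

```shell
#!/usr/bin/env bash
# Run a command with a time bound; treat any failure as non-fatal.
best_effort() {
  local seconds="$1"; shift
  timeout "$seconds" "$@" || true
}

# Usage mirrors the trace, e.g.:
#   best_effort 600 kubectl delete cluster test1
#   best_effort 600 kubectl wait --for=delete cluster/test1
best_effort 5 sh -c 'echo deleted'
best_effort 5 sh -c 'exit 7'   # failure is swallowed; teardown continues
```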
+ make kind-reset
make: *** No rule to make target 'kind-reset'.  Stop.
+ true
++ go env GOPATH
+ cd /home/prow/go/src/k8s.io/kubernetes
+ rm -f _output/bin/e2e.test
+ gcloud compute forwarding-rules delete --project k8s-gce-dg-1-6-1-5-dwngr-clu --global test1-apiserver --quiet
ERROR: (gcloud.compute.forwarding-rules.delete) Could not fetch resource:
 - The resource 'projects/k8s-gce-dg-1-6-1-5-dwngr-clu/global/forwardingRules/test1-apiserver' was not found

+ true
+ gcloud compute target-tcp-proxies delete --project k8s-gce-dg-1-6-1-5-dwngr-clu test1-apiserver --quiet
ERROR: (gcloud.compute.target-tcp-proxies.delete) Some requests did not succeed:
 - The resource 'projects/k8s-gce-dg-1-6-1-5-dwngr-clu/global/targetTcpProxies/test1-apiserver' was not found

+ true
+ gcloud compute backend-services delete --project k8s-gce-dg-1-6-1-5-dwngr-clu --global test1-apiserver --quiet
ERROR: (gcloud.compute.backend-services.delete) Some requests did not succeed:
 - The resource 'projects/k8s-gce-dg-1-6-1-5-dwngr-clu/global/backendServices/test1-apiserver' was not found

+ true
+ gcloud compute health-checks delete --project k8s-gce-dg-1-6-1-5-dwngr-clu test1-apiserver --quiet
ERROR: (gcloud.compute.health-checks.delete) Could not fetch resource:
 - The resource 'projects/k8s-gce-dg-1-6-1-5-dwngr-clu/global/healthChecks/test1-apiserver' was not found

+ true
+ gcloud compute instances list --project k8s-gce-dg-1-6-1-5-dwngr-clu
+ grep test1
+ awk '{print "gcloud compute instances delete --project k8s-gce-dg-1-6-1-5-dwngr-clu --quiet " $1 " --zone " $2 "\n"}'
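The cleanup enumerates leftover instances, filters on the cluster prefix, and uses awk to emit one delete command per NAME/ZONE pair from the listing's first two columns. A reproduction with canned `gcloud compute instances list` output (the real pipeline also carries the `--project` flag and appends a trailing newline per command):

```shell
#!/usr/bin/env bash
# Generate a delete command for every instance whose name matches "test1".
printf '%s\n' \
  'NAME ZONE MACHINE_TYPE STATUS' \
  'test1-md-0-nt7qz us-east4-a n1-standard-2 RUNNING' \
  'other-vm us-central1-b n1-standard-1 RUNNING' |
  grep test1 |
  awk '{print "gcloud compute instances delete --quiet " $1 " --zone " $2}'
```

Generating commands as text and piping them to a shell (as the skipped lines presumably do) keeps the cleanup decoupled from how many instances survived the test run.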
... skipping 55 lines ...