Result: FAILURE
Tests: 1 failed / 6 succeeded
Started: 2019-07-17 10:34
Elapsed: 26m40s
Builder: gke-prow-ssd-pool-1a225945-w688
pod: 5c04949c-a87e-11e9-a9b1-1a2b998c5cd0
infra-commit: 20ed87078
job-version: v1.16.0-alpha.1.12+835552ecb6626e
revision: v1.16.0-alpha.1.12+835552ecb6626e
resultstore: https://source.cloud.google.com/results/invocations/257422d3-2a7c-4e9d-a2d6-532d45a46b49/targets/test

Test Failures


Up 13m29s

error creating cluster: error during gcloud beta container clusters create --quiet --project=k8s-jkns-gci-autoscaling-regio --region=us-central1 --machine-type=n1-standard-2 --image-type=gci --num-nodes=3 --network=bootstrap-e2e --cluster-version=1.16.0-alpha.1.12+835552ecb6626e bootstrap-e2e: exit status 1
				from junit_runner.xml
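For local reproduction, the failing step is the gcloud invocation quoted above; a sketch with the flags copied verbatim from this job (project, region, and cluster version are the values this run used, not general defaults):

  # Sketch: re-run the cluster-create step that failed, using the values from the error above.
  gcloud beta container clusters create bootstrap-e2e \
    --quiet \
    --project=k8s-jkns-gci-autoscaling-regio \
    --region=us-central1 \
    --machine-type=n1-standard-2 \
    --image-type=gci \
    --num-nodes=3 \
    --network=bootstrap-e2e \
    --cluster-version=1.16.0-alpha.1.12+835552ecb6626e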


Error lines from build-log.txt

... skipping 12 lines ...
I0717 10:34:47.532] process 44 exited with code 0 after 0.0m
I0717 10:34:47.533] Will upload results to gs://kubernetes-jenkins/logs using regional-e2e-test@k8s-jkns-e2e-regional.iam.gserviceaccount.com
I0717 10:34:47.533] Root: /workspace
I0717 10:34:47.534] cd to /workspace
I0717 10:34:47.534] Configure environment...
I0717 10:34:47.534] Call:  git show -s --format=format:%ct HEAD
W0717 10:34:47.538] fatal: Not a git repository (or any of the parent directories): .git
I0717 10:34:47.538] process 56 exited with code 128 after 0.0m
W0717 10:34:47.539] Unable to print commit date for HEAD
I0717 10:34:47.540] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0717 10:34:48.056] Activated service account credentials for: [regional-e2e-test@k8s-jkns-e2e-regional.iam.gserviceaccount.com]
I0717 10:34:48.219] process 57 exited with code 0 after 0.0m
I0717 10:34:48.219] Call:  gcloud config get-value account
... skipping 418 lines ...
W0717 10:37:24.127] WARNING: Newly created clusters and node-pools will have node auto-upgrade enabled by default. This can be disabled using the `--no-enable-autoupgrade` flag.
W0717 10:37:24.128] WARNING: Starting in 1.12, default node pools in new clusters will have their legacy Compute Engine instance metadata endpoints disabled by default. To create a cluster with legacy instance metadata endpoints disabled in the default node pool, run `clusters create` with the flag `--metadata disable-legacy-endpoints=true`.
W0717 10:37:24.128] WARNING: The Pod address range limits the maximum size of the cluster. Please refer to https://cloud.google.com/kubernetes-engine/docs/how-to/flexible-pod-cidr to learn how to optimize IP address allocation.
W0717 10:37:24.129] This will disable the autorepair feature for nodes. Please see https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for more information on node autorepairs.
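The warnings above name two opt-out flags; as an illustration only (this job did not pass them), they would be added to the same create command like so:

  # Assumption: illustrative only; the job under test did not set these flags.
  gcloud beta container clusters create bootstrap-e2e \
    --region=us-central1 \
    --no-enable-autoupgrade \
    --metadata disable-legacy-endpoints=true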
W0717 10:37:27.957] Creating cluster bootstrap-e2e in us-central1...
W0717 10:49:48.336] ..........................................done.
W0717 10:49:48.395] ERROR: (gcloud.beta.container.clusters.create) Operation [<Operation
W0717 10:49:48.398]  clusterConditions: []
W0717 10:49:48.398]  detail: u'All cluster resources were brought up, but the cluster API is reporting that: component "kube-apiserver" from endpoint "gke-2f94cef8d7758001e0c3-4e42" is unhealthy.'
W0717 10:49:48.399]  endTime: u'2019-07-17T10:49:46.243552186Z'
W0717 10:49:48.400]  name: u'operation-1563359847912-42a34dcc'
W0717 10:49:48.400]  nodepoolConditions: []
W0717 10:49:48.400]  operationType: OperationTypeValueValuesEnum(CREATE_CLUSTER, 1)
... skipping 6 lines ...
W0717 10:49:48.403]  stages: []>
W0717 10:49:48.403]  selfLink: u'https://test-container.sandbox.googleapis.com/v1beta1/projects/113388727899/locations/us-central1/operations/operation-1563359847912-42a34dcc'
W0717 10:49:48.404]  startTime: u'2019-07-17T10:37:27.912221061Z'
W0717 10:49:48.405]  status: StatusValueValuesEnum(DONE, 3)
W0717 10:49:48.405]  statusMessage: u'All cluster resources were brought up, but the cluster API is reporting that: component "kube-apiserver" from endpoint "gke-2f94cef8d7758001e0c3-4e42" is unhealthy.'
W0717 10:49:48.405]  targetLink: u'https://test-container.sandbox.googleapis.com/v1beta1/projects/113388727899/locations/us-central1/clusters/bootstrap-e2e'
W0717 10:49:48.406]  zone: u'us-central1'>] finished with error: All cluster resources were brought up, but the cluster API is reporting that: component "kube-apiserver" from endpoint "gke-2f94cef8d7758001e0c3-4e42" is unhealthy.
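To investigate why "kube-apiserver" was reported unhealthy, the operation and cluster named in the error can be queried with gcloud; a sketch, assuming access to the same project (this job talks to a test endpoint, so results may differ from the production API):

  # Sketch: inspect the failed create operation and the cluster's reported status.
  gcloud container operations describe operation-1563359847912-42a34dcc \
    --region=us-central1 --project=k8s-jkns-gci-autoscaling-regio
  gcloud container clusters describe bootstrap-e2e \
    --region=us-central1 --project=k8s-jkns-gci-autoscaling-regio \
    --format='value(status,statusMessage)'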
W0717 10:49:48.537] 2019/07/17 10:49:48 process.go:155: Step 'gcloud beta container clusters create --quiet --project=k8s-jkns-gci-autoscaling-regio --region=us-central1 --machine-type=n1-standard-2 --image-type=gci --num-nodes=3 --network=bootstrap-e2e --cluster-version=1.16.0-alpha.1.12+835552ecb6626e bootstrap-e2e' finished in 12m24.900462103s
W0717 10:49:49.475] 2019/07/17 10:49:49 process.go:153: Running: bash -c 
W0717 10:49:49.477] function log_dump_custom_get_instances() {
W0717 10:49:49.477]   if [[ $1 == "master" ]]; then
W0717 10:49:49.478]     return 0
W0717 10:49:49.479]   fi
... skipping 13 lines ...
I0717 10:49:49.626] No masters found?
I0717 10:49:49.627] Dumping logs from nodes locally to '/workspace/_artifacts'
I0717 10:49:49.627] Dumping logs for nodes provided by log_dump_custom_get_instances() function
W0717 10:49:51.390] WARNING: --filter : operator evaluation is changing for consistency across Google APIs.  metadata.created-by:*zones/us-central1-b/instanceGroupManagers/gke-bootstrap-e2e-default-pool-b68ec802-grp currently matches but will not match in the near future.  Run `gcloud topic filters` for details.
I0717 10:50:54.076] Changing logfiles to be world-readable for download
I0717 10:50:54.328] Changing logfiles to be world-readable for download
W0717 10:50:55.112] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:50:55.214] ERROR: (gcloud.compute.ssh) could not parse resource []
I0717 10:50:56.004] Changing logfiles to be world-readable for download
I0717 10:50:56.361] Changing logfiles to be world-readable for download
I0717 10:50:56.891] Changing logfiles to be world-readable for download
W0717 10:50:57.139] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:50:57.598] ERROR: (gcloud.compute.ssh) could not parse resource []
I0717 10:50:58.098] Changing logfiles to be world-readable for download
I0717 10:50:58.158] Changing logfiles to be world-readable for download
W0717 10:50:58.268] ERROR: (gcloud.compute.ssh) could not parse resource []
I0717 10:50:58.369] Changing logfiles to be world-readable for download
I0717 10:50:58.511] Changing logfiles to be world-readable for download
W0717 10:50:59.051] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:50:59.180] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:50:59.389] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:50:59.461] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:01.145] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:01.206] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:03.163] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:03.586] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:04.246] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:05.126] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:05.369] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:05.518] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:05.646] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:07.080] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:07.206] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:09.079] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:09.423] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:09.948] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:10.850] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:11.215] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:11.318] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:11.370] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:12.793] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:12.907] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:14.750] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:15.063] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:15.615] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:16.512] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:16.944] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:17.051] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:17.162] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:18.525] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:18.586] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:20.513] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:20.779] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:21.884] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:22.874] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:22.888] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:22.917] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:23.063] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:24.499] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:24.507] ERROR: (gcloud.compute.ssh) could not parse resource []
I0717 10:51:24.609] Copying 'kern.log kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from gke-bootstrap-e2e-default-pool-55a6eb75-jbsc
I0717 10:51:24.609] Copying 'kern.log kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from gke-bootstrap-e2e-default-pool-4f81898c-bdrr
W0717 10:51:25.220] ERROR: (gcloud.compute.instances.get-serial-port-output) could not parse resource []
W0717 10:51:25.226] ERROR: (gcloud.compute.instances.get-serial-port-output) could not parse resource []
W0717 10:51:26.005] ERROR: (gcloud.compute.scp) could not parse resource []
W0717 10:51:26.010] ERROR: (gcloud.compute.scp) could not parse resource []
W0717 10:51:26.288] ERROR: (gcloud.compute.ssh) could not parse resource []
I0717 10:51:26.389] Copying 'kern.log kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from gke-bootstrap-e2e-default-pool-b68ec802-0tp4
W0717 10:51:26.541] ERROR: (gcloud.compute.ssh) could not parse resource []
I0717 10:51:26.641] Copying 'kern.log kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from gke-bootstrap-e2e-default-pool-b68ec802-lm8k
W0717 10:51:26.977] ERROR: (gcloud.compute.instances.get-serial-port-output) could not parse resource []
W0717 10:51:27.292] ERROR: (gcloud.compute.instances.get-serial-port-output) could not parse resource []
W0717 10:51:27.733] ERROR: (gcloud.compute.ssh) could not parse resource []
I0717 10:51:27.835] Copying 'kern.log kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from gke-bootstrap-e2e-default-pool-b68ec802-3lfc
W0717 10:51:27.936] ERROR: (gcloud.compute.scp) could not parse resource []
W0717 10:51:28.040] ERROR: (gcloud.compute.scp) could not parse resource []
W0717 10:51:28.606] ERROR: (gcloud.compute.instances.get-serial-port-output) could not parse resource []
W0717 10:51:28.818] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:28.880] ERROR: (gcloud.compute.ssh) could not parse resource []
W0717 10:51:28.950] ERROR: (gcloud.compute.ssh) could not parse resource []
I0717 10:51:29.051] Copying 'kern.log kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from gke-bootstrap-e2e-default-pool-55a6eb75-nbgf
I0717 10:51:29.051] Copying 'kern.log kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from gke-bootstrap-e2e-default-pool-4f81898c-7wsv
I0717 10:51:29.064] Copying 'kern.log kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from gke-bootstrap-e2e-default-pool-4f81898c-d47b
W0717 10:51:29.165] ERROR: (gcloud.compute.ssh) could not parse resource []
I0717 10:51:29.266] Copying 'kern.log kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from gke-bootstrap-e2e-default-pool-55a6eb75-b78d
W0717 10:51:29.745] ERROR: (gcloud.compute.scp) could not parse resource []
W0717 10:51:29.944] ERROR: (gcloud.compute.instances.get-serial-port-output) could not parse resource []
W0717 10:51:29.948] ERROR: (gcloud.compute.instances.get-serial-port-output) could not parse resource []
W0717 10:51:29.977] ERROR: (gcloud.compute.instances.get-serial-port-output) could not parse resource []
W0717 10:51:30.085] ERROR: (gcloud.compute.instances.get-serial-port-output) could not parse resource []
W0717 10:51:30.749] ERROR: (gcloud.compute.scp) could not parse resource []
W0717 10:51:30.825] ERROR: (gcloud.compute.scp) could not parse resource []
W0717 10:51:30.967] ERROR: (gcloud.compute.scp) could not parse resource []
W0717 10:51:31.031] ERROR: (gcloud.compute.scp) could not parse resource []
W0717 10:51:31.126] Project: k8s-jkns-gci-autoscaling-regio
W0717 10:51:31.126] Network Project: k8s-jkns-gci-autoscaling-regio
W0717 10:51:31.127] Zone: 
W0717 10:51:34.224] INSTANCE_GROUPS=
W0717 10:51:34.224] NODE_NAMES=
W0717 10:51:34.227] 2019/07/17 10:51:34 process.go:155: Step 'bash -c 
... skipping 21 lines ...
W0717 11:00:04.454] 2019/07/17 11:00:04 process.go:155: Step 'gcloud compute firewall-rules describe e2e-ports-b68ec802 --project=k8s-jkns-gci-autoscaling-regio --format=value(name)' finished in 828.807184ms
W0717 11:00:04.454] 2019/07/17 11:00:04 gke.go:631: Found no rules for firewall 'e2e-ports-b68ec802', assuming resources are clean
W0717 11:00:05.284] 2019/07/17 11:00:05 process.go:153: Running: gcloud compute networks delete -q bootstrap-e2e --project=k8s-jkns-gci-autoscaling-regio
W0717 11:01:18.998] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-gci-autoscaling-regio/global/networks/bootstrap-e2e].
W0717 11:01:19.095] 2019/07/17 11:01:19 process.go:155: Step 'gcloud compute networks delete -q bootstrap-e2e --project=k8s-jkns-gci-autoscaling-regio' finished in 1m13.810008111s
W0717 11:01:19.095] 2019/07/17 11:01:19 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0717 11:01:19.099] 2019/07/17 11:01:19 main.go:316: Something went wrong: starting e2e cluster: error creating cluster: error during gcloud beta container clusters create --quiet --project=k8s-jkns-gci-autoscaling-regio --region=us-central1 --machine-type=n1-standard-2 --image-type=gci --num-nodes=3 --network=bootstrap-e2e --cluster-version=1.16.0-alpha.1.12+835552ecb6626e bootstrap-e2e: exit status 1
W0717 11:01:19.102] Traceback (most recent call last):
W0717 11:01:19.102]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W0717 11:01:19.104]     main(parse_args())
W0717 11:01:19.104]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W0717 11:01:19.104]     mode.start(runner_args)
W0717 11:01:19.105]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0717 11:01:19.105]     check_env(env, self.command, *args)
W0717 11:01:19.105]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0717 11:01:19.105]     subprocess.check_call(cmd, env=env)
W0717 11:01:19.105]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0717 11:01:19.105]     raise CalledProcessError(retcode, cmd)
W0717 11:01:19.106] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--deployment=gke', '--provider=gke', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--check-leaked-resources', '--extract=ci-cross/latest', '--gcp-cloud-sdk=gs://cloud-sdk-testing/ci/staging', '--gcp-node-image=gci', '--gcp-project=k8s-jkns-gci-autoscaling-regio', '--gcp-region=us-central1', '--gke-command-group=beta', '--gke-environment=test', '--gke-single-zone-node-instance-group=false', '--test_args=--gce-multizone=true --ginkgo.focus=\\[Feature:ClusterSizeAutoscalingScaleUp\\]|\\[Feature:ClusterSizeAutoscalingScaleDown\\] --ginkgo.skip=\\[Flaky\\] --minStartupPods=8', '--timeout=400m')' returned non-zero exit status 1
E0717 11:01:19.116] Command failed
I0717 11:01:19.116] process 259 exited with code 1 after 26.5m
E0717 11:01:19.116] FAIL: ci-kubernetes-e2e-gci-gke-autoscaling-regional
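For reference, the kubetest command from the CalledProcessError above, rewritten as a single shell invocation (a sketch; the /workspace and /etc/service-account paths are specific to this builder):

  # Sketch: kubetest invocation reconstructed from the traceback above.
  kubetest --dump=/workspace/_artifacts \
    --gcp-service-account=/etc/service-account/service-account.json \
    --up --down --test \
    --deployment=gke --provider=gke \
    --cluster=bootstrap-e2e --gcp-network=bootstrap-e2e \
    --check-leaked-resources \
    --extract=ci-cross/latest \
    --gcp-cloud-sdk=gs://cloud-sdk-testing/ci/staging \
    --gcp-node-image=gci \
    --gcp-project=k8s-jkns-gci-autoscaling-regio \
    --gcp-region=us-central1 \
    --gke-command-group=beta --gke-environment=test \
    --gke-single-zone-node-instance-group=false \
    --test_args='--gce-multizone=true --ginkgo.focus=\[Feature:ClusterSizeAutoscalingScaleUp\]|\[Feature:ClusterSizeAutoscalingScaleDown\] --ginkgo.skip=\[Flaky\] --minStartupPods=8' \
    --timeout=400m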
I0717 11:01:19.118] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0717 11:01:19.635] Activated service account credentials for: [regional-e2e-test@k8s-jkns-e2e-regional.iam.gserviceaccount.com]
I0717 11:01:19.689] process 3783 exited with code 0 after 0.0m
I0717 11:01:19.689] Call:  gcloud config get-value account
I0717 11:01:20.026] process 3795 exited with code 0 after 0.0m
I0717 11:01:20.027] Will upload results to gs://kubernetes-jenkins/logs using regional-e2e-test@k8s-jkns-e2e-regional.iam.gserviceaccount.com
I0717 11:01:20.027] Upload result and artifacts...
I0717 11:01:20.027] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-autoscaling-regional/1151440038236524544
I0717 11:01:20.028] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-autoscaling-regional/1151440038236524544/artifacts
W0717 11:01:21.095] CommandException: One or more URLs matched no objects.
E0717 11:01:21.228] Command failed
I0717 11:01:21.229] process 3807 exited with code 1 after 0.0m
W0717 11:01:21.229] Remote dir gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-autoscaling-regional/1151440038236524544/artifacts not exist yet
I0717 11:01:21.229] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-autoscaling-regional/1151440038236524544/artifacts
I0717 11:01:23.136] process 3949 exited with code 0 after 0.0m
I0717 11:01:23.136] Call:  git rev-parse HEAD
W0717 11:01:23.141] fatal: Not a git repository (or any of the parent directories): .git
E0717 11:01:23.141] Command failed
I0717 11:01:23.141] process 4500 exited with code 128 after 0.0m
I0717 11:01:23.142] Call:  git rev-parse HEAD
I0717 11:01:23.153] process 4501 exited with code 0 after 0.0m
I0717 11:01:23.154] Call:  gsutil stat gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-autoscaling-regional/jobResultsCache.json
I0717 11:01:24.251] process 4502 exited with code 0 after 0.0m
I0717 11:01:24.252] Call:  gsutil -q cat 'gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-autoscaling-regional/jobResultsCache.json#1563359153160174'
... skipping 8 lines ...