Result: FAILURE
Tests: 1 failed / 6 succeeded
Started: 2019-07-18 00:10
Elapsed: 24m23s
Revision: v1.16.0-alpha.1.20+5db091dde4d7de
Builder: gke-prow-ssd-pool-1a225945-810g
pod: 4dda8c99-a8f0-11e9-97aa-4a95e4f58ae7
resultstore: https://source.cloud.google.com/results/invocations/400d7f98-36e7-4be5-b63d-b5bef2a95286/targets/test
infra-commit: a4125ca52
job-version: v1.16.0-alpha.1.20+5db091dde4d7de

Test Failures

Up (13m50s)

error creating cluster: error during gcloud beta container clusters create --quiet --project=k8s-jkns-gci-autoscaling-regio --region=us-central1 --machine-type=n1-standard-2 --image-type=gci --num-nodes=3 --network=bootstrap-e2e --cluster-version=1.16.0-alpha.1.20+5db091dde4d7de bootstrap-e2e: exit status 1
				from junit_runner.xml
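For manual triage of this kind of failure, one option is to describe the failed GKE operation directly and read its statusMessage. The sketch below is a hypothetical helper, not part of the test harness; it assumes gcloud is authenticated with access to the k8s-jkns-gci-autoscaling-regio project and pointed at the same container API endpoint the job used. The operation name is taken from the log further down.

    # Hypothetical triage helper; assumes gcloud auth and project access.
    import subprocess

    def describe_gke_operation(project, region, operation):
        # 'gcloud beta container operations describe' prints the operation's
        # status and error detail (here: kube-apiserver reported unhealthy).
        cmd = [
            "gcloud", "beta", "container", "operations", "describe", operation,
            "--project", project, "--region", region,
        ]
        return subprocess.check_output(cmd).decode()

    print(describe_gke_operation(
        "k8s-jkns-gci-autoscaling-regio", "us-central1",
        "operation-1563408798004-89a1df86"))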




Error lines from build-log.txt

... skipping 12 lines ...
I0718 00:10:20.515] process 44 exited with code 0 after 0.0m
I0718 00:10:20.515] Will upload results to gs://kubernetes-jenkins/logs using regional-e2e-test@k8s-jkns-e2e-regional.iam.gserviceaccount.com
I0718 00:10:20.516] Root: /workspace
I0718 00:10:20.516] cd to /workspace
I0718 00:10:20.516] Configure environment...
I0718 00:10:20.517] Call:  git show -s --format=format:%ct HEAD
W0718 00:10:20.521] fatal: Not a git repository (or any of the parent directories): .git
I0718 00:10:20.522] process 56 exited with code 128 after 0.0m
W0718 00:10:20.522] Unable to print commit date for HEAD
I0718 00:10:20.523] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0718 00:10:21.192] Activated service account credentials for: [regional-e2e-test@k8s-jkns-e2e-regional.iam.gserviceaccount.com]
I0718 00:10:21.612] process 57 exited with code 0 after 0.0m
I0718 00:10:21.612] Call:  gcloud config get-value account
... skipping 418 lines ...
W0718 00:13:14.397] WARNING: Newly created clusters and node-pools will have node auto-upgrade enabled by default. This can be disabled using the `--no-enable-autoupgrade` flag.
W0718 00:13:14.398] WARNING: Starting in 1.12, default node pools in new clusters will have their legacy Compute Engine instance metadata endpoints disabled by default. To create a cluster with legacy instance metadata endpoints disabled in the default node pool, run `clusters create` with the flag `--metadata disable-legacy-endpoints=true`.
W0718 00:13:14.399] WARNING: The Pod address range limits the maximum size of the cluster. Please refer to https://cloud.google.com/kubernetes-engine/docs/how-to/flexible-pod-cidr to learn how to optimize IP address allocation.
W0718 00:13:14.400] This will disable the autorepair feature for nodes. Please see https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for more information on node autorepairs.
W0718 00:13:18.070] Creating cluster bootstrap-e2e in us-central1...
W0718 00:26:03.764] ....................................................................done.
W0718 00:26:03.899] ERROR: (gcloud.beta.container.clusters.create) Operation [<Operation
W0718 00:26:03.900]  clusterConditions: []
W0718 00:26:03.900]  detail: u'All cluster resources were brought up, but the cluster API is reporting that: component "kube-apiserver" from endpoint "gke-c98d1fa48be130147d98-d39b" is unhealthy.'
W0718 00:26:03.901]  endTime: u'2019-07-18T00:26:02.429679322Z'
W0718 00:26:03.901]  name: u'operation-1563408798004-89a1df86'
W0718 00:26:03.901]  nodepoolConditions: []
W0718 00:26:03.901]  operationType: OperationTypeValueValuesEnum(CREATE_CLUSTER, 1)
... skipping 6 lines ...
W0718 00:26:03.902]  stages: []>
W0718 00:26:03.903]  selfLink: u'https://test-container.sandbox.googleapis.com/v1beta1/projects/113388727899/locations/us-central1/operations/operation-1563408798004-89a1df86'
W0718 00:26:03.903]  startTime: u'2019-07-18T00:13:18.004397901Z'
W0718 00:26:03.903]  status: StatusValueValuesEnum(DONE, 3)
W0718 00:26:03.903]  statusMessage: u'All cluster resources were brought up, but the cluster API is reporting that: component "kube-apiserver" from endpoint "gke-c98d1fa48be130147d98-d39b" is unhealthy.'
W0718 00:26:03.903]  targetLink: u'https://test-container.sandbox.googleapis.com/v1beta1/projects/113388727899/locations/us-central1/clusters/bootstrap-e2e'
W0718 00:26:03.904]  zone: u'us-central1'>] finished with error: All cluster resources were brought up, but the cluster API is reporting that: component "kube-apiserver" from endpoint "gke-c98d1fa48be130147d98-d39b" is unhealthy.
W0718 00:26:04.015] 2019/07/18 00:26:04 process.go:155: Step 'gcloud beta container clusters create --quiet --project=k8s-jkns-gci-autoscaling-regio --region=us-central1 --machine-type=n1-standard-2 --image-type=gci --num-nodes=3 --network=bootstrap-e2e --cluster-version=1.16.0-alpha.1.20+5db091dde4d7de bootstrap-e2e' finished in 12m50.457043825s
W0718 00:26:05.129] 2019/07/18 00:26:05 process.go:153: Running: bash -c 
W0718 00:26:05.130] function log_dump_custom_get_instances() {
W0718 00:26:05.131]   if [[ $1 == "master" ]]; then
W0718 00:26:05.131]     return 0
W0718 00:26:05.131]   fi
... skipping 12 lines ...
I0718 00:26:05.279] Dumping logs from master locally to '/workspace/_artifacts'
I0718 00:26:05.282] No masters found?
I0718 00:26:05.283] Dumping logs from nodes locally to '/workspace/_artifacts'
I0718 00:26:05.283] Dumping logs for nodes provided by log_dump_custom_get_instances() function
W0718 00:26:06.996] WARNING: --filter : operator evaluation is changing for consistency across Google APIs.  metadata.created-by:*zones/us-central1-b/instanceGroupManagers/gke-bootstrap-e2e-default-pool-4843f119-grp currently matches but will not match in the near future.  Run `gcloud topic filters` for details.
I0718 00:27:15.469] Changing logfiles to be world-readable for download
W0718 00:27:17.120] ERROR: (gcloud.compute.ssh) could not parse resource []
I0718 00:27:19.510] Changing logfiles to be world-readable for download
I0718 00:27:20.201] Changing logfiles to be world-readable for download
W0718 00:27:21.237] ERROR: (gcloud.compute.ssh) could not parse resource []
I0718 00:27:21.610] Changing logfiles to be world-readable for download
W0718 00:27:22.137] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:23.006] ERROR: (gcloud.compute.ssh) could not parse resource []
I0718 00:27:24.052] Changing logfiles to be world-readable for download
W0718 00:27:24.154] ERROR: (gcloud.compute.ssh) could not parse resource []
I0718 00:27:24.445] Changing logfiles to be world-readable for download
W0718 00:27:25.116] ERROR: (gcloud.compute.ssh) could not parse resource []
I0718 00:27:25.561] Changing logfiles to be world-readable for download
I0718 00:27:25.703] Changing logfiles to be world-readable for download
W0718 00:27:25.804] ERROR: (gcloud.compute.ssh) could not parse resource []
I0718 00:27:26.117] Changing logfiles to be world-readable for download
W0718 00:27:26.380] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:26.619] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:27.125] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:27.407] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:28.103] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:28.899] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:30.160] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:30.973] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:31.582] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:32.234] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:32.540] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:33.016] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:33.237] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:33.885] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:34.774] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:36.030] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:36.920] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:38.046] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:38.645] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:39.054] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:39.564] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:39.763] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:40.136] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:41.150] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:42.067] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:43.164] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:44.352] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:45.574] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:46.265] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:46.820] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:46.958] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:47.091] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:47.694] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:48.422] ERROR: (gcloud.compute.ssh) could not parse resource []
I0718 00:27:48.525] Copying 'kern.log kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from gke-bootstrap-e2e-default-pool-7d63328e-x2q0
W0718 00:27:49.551] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:49.713] ERROR: (gcloud.compute.instances.get-serial-port-output) could not parse resource []
W0718 00:27:50.534] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:50.685] ERROR: (gcloud.compute.scp) could not parse resource []
W0718 00:27:51.688] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:52.386] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:53.187] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:53.273] ERROR: (gcloud.compute.ssh) could not parse resource []
I0718 00:27:53.375] Copying 'kern.log kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from gke-bootstrap-e2e-default-pool-4843f119-s9xn
I0718 00:27:53.389] Copying 'kern.log kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from gke-bootstrap-e2e-default-pool-90da0ed4-pgzv
W0718 00:27:53.491] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:53.988] ERROR: (gcloud.compute.ssh) could not parse resource []
I0718 00:27:54.090] Copying 'kern.log kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from gke-bootstrap-e2e-default-pool-7d63328e-tlf7
W0718 00:27:54.253] ERROR: (gcloud.compute.instances.get-serial-port-output) could not parse resource []
W0718 00:27:54.348] ERROR: (gcloud.compute.instances.get-serial-port-output) could not parse resource []
W0718 00:27:54.989] ERROR: (gcloud.compute.instances.get-serial-port-output) could not parse resource []
W0718 00:27:55.238] ERROR: (gcloud.compute.scp) could not parse resource []
W0718 00:27:55.278] ERROR: (gcloud.compute.scp) could not parse resource []
W0718 00:27:55.628] ERROR: (gcloud.compute.ssh) could not parse resource []
I0718 00:27:55.734] Copying 'kern.log kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from gke-bootstrap-e2e-default-pool-4843f119-c72w
W0718 00:27:56.252] ERROR: (gcloud.compute.scp) could not parse resource []
W0718 00:27:56.750] ERROR: (gcloud.compute.ssh) could not parse resource []
I0718 00:27:56.854] Copying 'kern.log kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from gke-bootstrap-e2e-default-pool-7d63328e-m18c
W0718 00:27:56.956] ERROR: (gcloud.compute.instances.get-serial-port-output) could not parse resource []
W0718 00:27:57.712] ERROR: (gcloud.compute.ssh) could not parse resource []
I0718 00:27:57.817] Copying 'kern.log kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from gke-bootstrap-e2e-default-pool-90da0ed4-h28j
W0718 00:27:57.918] ERROR: (gcloud.compute.instances.get-serial-port-output) could not parse resource []
W0718 00:27:58.036] ERROR: (gcloud.compute.scp) could not parse resource []
W0718 00:27:58.482] ERROR: (gcloud.compute.ssh) could not parse resource []
I0718 00:27:58.584] Copying 'kern.log kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from gke-bootstrap-e2e-default-pool-4843f119-wfxd
W0718 00:27:58.715] ERROR: (gcloud.compute.instances.get-serial-port-output) could not parse resource []
W0718 00:27:58.982] ERROR: (gcloud.compute.scp) could not parse resource []
W0718 00:27:59.401] ERROR: (gcloud.compute.ssh) could not parse resource []
W0718 00:27:59.431] ERROR: (gcloud.compute.instances.get-serial-port-output) could not parse resource []
I0718 00:27:59.532] Copying 'kern.log kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from gke-bootstrap-e2e-default-pool-90da0ed4-5tsf
W0718 00:27:59.728] ERROR: (gcloud.compute.scp) could not parse resource []
W0718 00:28:00.382] ERROR: (gcloud.compute.instances.get-serial-port-output) could not parse resource []
W0718 00:28:00.444] ERROR: (gcloud.compute.scp) could not parse resource []
W0718 00:28:01.380] ERROR: (gcloud.compute.scp) could not parse resource []
W0718 00:28:01.478] Project: k8s-jkns-gci-autoscaling-regio
W0718 00:28:01.479] Network Project: k8s-jkns-gci-autoscaling-regio
W0718 00:28:01.479] Zone: 
W0718 00:28:05.046] INSTANCE_GROUPS=
W0718 00:28:05.047] NODE_NAMES=
W0718 00:28:05.047] 2019/07/18 00:28:05 process.go:155: Step 'bash -c 
... skipping 21 lines ...
W0718 00:33:24.390] 2019/07/18 00:33:24 process.go:155: Step 'gcloud compute firewall-rules describe e2e-ports-4843f119 --project=k8s-jkns-gci-autoscaling-regio --format=value(name)' finished in 941.286139ms
W0718 00:33:24.391] 2019/07/18 00:33:24 gke.go:631: Found no rules for firewall 'e2e-ports-4843f119', assuming resources are clean
W0718 00:33:25.259] 2019/07/18 00:33:25 process.go:153: Running: gcloud compute networks delete -q bootstrap-e2e --project=k8s-jkns-gci-autoscaling-regio
W0718 00:34:29.186] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-gci-autoscaling-regio/global/networks/bootstrap-e2e].
W0718 00:34:29.437] 2019/07/18 00:34:29 process.go:155: Step 'gcloud compute networks delete -q bootstrap-e2e --project=k8s-jkns-gci-autoscaling-regio' finished in 1m4.182222287s
W0718 00:34:29.439] 2019/07/18 00:34:29 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0718 00:34:29.440] 2019/07/18 00:34:29 main.go:316: Something went wrong: starting e2e cluster: error creating cluster: error during gcloud beta container clusters create --quiet --project=k8s-jkns-gci-autoscaling-regio --region=us-central1 --machine-type=n1-standard-2 --image-type=gci --num-nodes=3 --network=bootstrap-e2e --cluster-version=1.16.0-alpha.1.20+5db091dde4d7de bootstrap-e2e: exit status 1
W0718 00:34:29.452] Traceback (most recent call last):
W0718 00:34:29.453]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W0718 00:34:29.453]     main(parse_args())
W0718 00:34:29.454]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W0718 00:34:29.454]     mode.start(runner_args)
W0718 00:34:29.454]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0718 00:34:29.455]     check_env(env, self.command, *args)
W0718 00:34:29.455]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0718 00:34:29.456]     subprocess.check_call(cmd, env=env)
W0718 00:34:29.456]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0718 00:34:29.457]     raise CalledProcessError(retcode, cmd)
W0718 00:34:29.458] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--deployment=gke', '--provider=gke', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--check-leaked-resources', '--extract=ci-cross/latest', '--gcp-cloud-sdk=gs://cloud-sdk-testing/ci/staging', '--gcp-node-image=gci', '--gcp-project=k8s-jkns-gci-autoscaling-regio', '--gcp-region=us-central1', '--gke-command-group=beta', '--gke-environment=test', '--gke-single-zone-node-instance-group=false', '--test_args=--gce-multizone=true --ginkgo.focus=\\[Feature:ClusterSizeAutoscalingScaleUp\\]|\\[Feature:ClusterSizeAutoscalingScaleDown\\] --ginkgo.skip=\\[Flaky\\] --minStartupPods=8', '--timeout=400m')' returned non-zero exit status 1
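The traceback above is the scenario wrapper surfacing kubetest's exit status: check_env hands the kubetest command line and environment to subprocess.check_call, which raises CalledProcessError on any non-zero exit. A simplified sketch of that pattern (not the script's exact code):

    import subprocess

    def check_env(env, *cmd):
        # Run the command with the supplied environment; check_call raises
        # CalledProcessError when the child (here, kubetest) exits non-zero,
        # which is what produces the 'Command failed' lines below.
        subprocess.check_call(cmd, env=env)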
E0718 00:34:29.463] Command failed
I0718 00:34:29.463] process 259 exited with code 1 after 24.1m
E0718 00:34:29.463] FAIL: ci-kubernetes-e2e-gci-gke-autoscaling-regional
I0718 00:34:29.464] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0718 00:34:30.305] Activated service account credentials for: [regional-e2e-test@k8s-jkns-e2e-regional.iam.gserviceaccount.com]
I0718 00:34:30.395] process 3783 exited with code 0 after 0.0m
I0718 00:34:30.396] Call:  gcloud config get-value account
I0718 00:34:30.884] process 3795 exited with code 0 after 0.0m
I0718 00:34:30.884] Will upload results to gs://kubernetes-jenkins/logs using regional-e2e-test@k8s-jkns-e2e-regional.iam.gserviceaccount.com
I0718 00:34:30.885] Upload result and artifacts...
I0718 00:34:30.885] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-autoscaling-regional/1151645248020025344
I0718 00:34:30.886] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-autoscaling-regional/1151645248020025344/artifacts
W0718 00:34:32.277] CommandException: One or more URLs matched no objects.
E0718 00:34:32.417] Command failed
I0718 00:34:32.418] process 3807 exited with code 1 after 0.0m
W0718 00:34:32.418] Remote dir gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-autoscaling-regional/1151645248020025344/artifacts not exist yet
I0718 00:34:32.418] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-autoscaling-regional/1151645248020025344/artifacts
I0718 00:34:35.740] process 3949 exited with code 0 after 0.1m
I0718 00:34:35.742] Call:  git rev-parse HEAD
W0718 00:34:35.751] fatal: Not a git repository (or any of the parent directories): .git
E0718 00:34:35.756] Command failed
I0718 00:34:35.757] process 4500 exited with code 128 after 0.0m
I0718 00:34:35.759] Call:  git rev-parse HEAD
I0718 00:34:35.775] process 4501 exited with code 0 after 0.0m
I0718 00:34:35.777] Call:  gsutil stat gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-autoscaling-regional/jobResultsCache.json
I0718 00:34:37.657] process 4502 exited with code 0 after 0.0m
I0718 00:34:37.658] Call:  gsutil -q cat 'gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-autoscaling-regional/jobResultsCache.json#1563408358584705'
... skipping 8 lines ...