PR: losipiuk: Allow relaxing deleted pods checking in RC runner (#82029)
Result: FAILURE
Tests: 1 failed / 7 succeeded
Started: 2019-09-19 18:28
Elapsed: 39m13s
Builder: gke-prow-ssd-pool-1a225945-03kr
Refs: master:63ca9f61, 82029:c2e4fee8
pod: 04df390f-db0b-11e9-b559-260d2af1bc04
infra-commit: 5458503e8
job-version: v1.17.0-alpha.0.1579+325dd2567f2520
repo: k8s.io/kubernetes
repo-commit: 325dd2567f252099b6862d46b81a5e0fa92f60df
repos: k8s.io/kubernetes: master:63ca9f61af38d44d15d29a282dd9e3b4f01ff84e,82029:c2e4fee8d91e810012c7b37e24e5d3f8d4177188; k8s.io/perf-tests: master; k8s.io/release: master
revision: v1.17.0-alpha.0.1579+325dd2567f2520

Test Failures


Up (1m12s)

    error during ./hack/e2e-internal/e2e-up.sh: exit status 2
    from junit_runner.xml
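
The log below indicates why e2e-up.sh exited: the minion instance template left over from an earlier run could not be deleted because it was still in use by the managed instance group e2e-82029-ac87c-minion-group, so kube-up's attempt to delete and recreate the template failed and cluster bring-up aborted. A minimal manual-cleanup sketch (not part of the job; assumes the project and zone from this run and that the leaked resources still exist):

  # Delete the managed instance group first; the template cannot be removed while the group references it.
  gcloud compute instance-groups managed delete e2e-82029-ac87c-minion-group \
      --project=k8s-presubmit-scale --zone=us-east1-b --quiet
  # Then delete the orphaned instance template so the next run can recreate it.
  gcloud compute instance-templates delete e2e-82029-ac87c-minion-template \
      --project=k8s-presubmit-scale --quiet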




Error lines from build-log.txt

... skipping 976 lines ...
W0919 18:46:57.935] Project: k8s-presubmit-scale
W0919 18:46:57.935] Network Project: k8s-presubmit-scale
W0919 18:46:57.935] Zone: us-east1-b
I0919 18:47:00.851] Bringing down cluster
W0919 18:47:00.951] INSTANCE_GROUPS=
W0919 18:47:00.952] NODE_NAMES=
W0919 18:47:05.135] ERROR: (gcloud.compute.instance-templates.delete) Could not fetch resource:
W0919 18:47:05.136]  - The instance_template resource 'projects/k8s-presubmit-scale/global/instanceTemplates/e2e-82029-ac87c-minion-template' is already being used by 'projects/k8s-presubmit-scale/zones/us-east1-b/instanceGroupManagers/e2e-82029-ac87c-minion-group'
W0919 18:47:05.136] 
W0919 18:47:15.029] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/instanceTemplates/e2e-82029-ac87c-windows-node-template].
W0919 18:49:44.806] ssh: connect to host 35.196.101.3 port 22: Connection timed out
W0919 18:49:44.815] ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
I0919 18:49:44.916] Removing etcd replica, name: e2e-82029-ac87c-master, port: 2379, result: 255
W0919 18:51:55.879] ssh: connect to host 35.196.101.3 port 22: Connection timed out
W0919 18:51:55.886] ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
I0919 18:51:55.986] Removing etcd replica, name: e2e-82029-ac87c-master, port: 4002, result: 255
W0919 18:52:02.662] Updated [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-82029-ac87c-master].
W0919 18:53:35.888] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-82029-ac87c-master].
W0919 18:54:09.657] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-82029-ac87c-minion-all].
W0919 18:54:18.427] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/regions/us-east1/addresses/e2e-82029-ac87c-master-ip].
I0919 18:54:26.920] Deleting nodes e2e-82029-ac87c-minion-group-2k0c e2e-82029-ac87c-minion-group-41ff e2e-82029-ac87c-minion-group-8jrd e2e-82029-ac87c-minion-group-9rv1 e2e-82029-ac87c-minion-group-nqk0 e2e-82029-ac87c-minion-group-xd9v e2e-82029-ac87c-minion-group-xgcn
... skipping 3 lines ...
W0919 18:55:51.606] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-82029-ac87c-minion-group-9rv1].
W0919 18:56:07.070] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-82029-ac87c-minion-group-nqk0].
W0919 18:56:12.230] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-82029-ac87c-minion-group-xgcn].
W0919 18:56:17.335] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-82029-ac87c-minion-group-xd9v].
I0919 18:56:24.292] Deleting firewall rules remaining in network e2e-82029-ac87c: 
I0919 18:56:25.163] Deleting custom subnet...
W0919 18:56:26.135] ERROR: (gcloud.compute.networks.subnets.delete) Could not fetch resource:
W0919 18:56:26.136]  - The subnetwork resource 'projects/k8s-presubmit-scale/regions/us-east1/subnetworks/e2e-82029-ac87c-custom-subnet' is already being used by 'projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-82029-ac87c-minion-group-2k0c'
W0919 18:56:26.136] 
W0919 18:56:31.324] ERROR: (gcloud.compute.networks.delete) Could not fetch resource:
W0919 18:56:31.324]  - The network resource 'projects/k8s-presubmit-scale/global/networks/e2e-82029-ac87c' is already being used by 'projects/k8s-presubmit-scale/zones/us-east1-b/instanceGroupManagers/e2e-82029-ac87c-minion-group'
W0919 18:56:31.324] 
I0919 18:56:31.425] Failed to delete network 'e2e-82029-ac87c'. Listing firewall-rules:
W0919 18:56:32.575] 
W0919 18:56:32.575] To show all fields of the firewall, please show in JSON format: --format=json
W0919 18:56:32.575] To show all fields in table format, please see the examples in --help.
W0919 18:56:32.575] 
W0919 18:56:32.804] W0919 18:56:32.804620   79224 loader.go:223] Config not found: /workspace/.kube/config
I0919 18:56:32.958] Property "clusters.k8s-presubmit-scale_e2e-82029-ac87c" unset.
... skipping 109 lines ...
W0919 18:57:41.874] done.
I0919 18:57:41.975] NAME                        NETWORK          DIRECTION  PRIORITY  ALLOW                     DENY  DISABLED
I0919 18:57:41.975] e2e-82029-ac87c-minion-all  e2e-82029-ac87c  INGRESS    1000      tcp,udp,icmp,esp,ah,sctp        False
I0919 18:57:41.975] Creating nodes.
I0919 18:57:43.666] Using subnet e2e-82029-ac87c-custom-subnet
W0919 18:57:44.775] Instance template e2e-82029-ac87c-minion-template already exists; deleting.
W0919 18:57:45.917] Failed to delete existing instance template
W0919 18:57:45.926] 2019/09/19 18:57:45 process.go:155: Step './hack/e2e-internal/e2e-up.sh' finished in 1m12.512158245s
W0919 18:57:45.927] 2019/09/19 18:57:45 e2e.go:522: Dumping logs locally to: /workspace/_artifacts
W0919 18:57:45.927] 2019/09/19 18:57:45 process.go:153: Running: ./cluster/log-dump/log-dump.sh /workspace/_artifacts
W0919 18:57:45.987] Trying to find master named 'e2e-82029-ac87c-master'
W0919 18:57:45.987] Looking for address 'e2e-82029-ac87c-master-ip'
I0919 18:57:46.088] Checking for custom logdump instances, if any
... skipping 17 lines ...
W0919 18:58:31.066] scp: /var/log/glbc.log*: No such file or directory
W0919 18:58:31.066] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0919 18:58:31.066] scp: /var/log/kube-addon-manager.log*: No such file or directory
W0919 18:58:31.066] scp: /var/log/fluentd.log*: No such file or directory
W0919 18:58:31.067] scp: /var/log/kubelet.cov*: No such file or directory
W0919 18:58:31.067] scp: /var/log/startupscript.log*: No such file or directory
W0919 18:58:31.072] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0919 18:58:31.173] Dumping logs from nodes locally to '/workspace/_artifacts'
I0919 18:58:31.173] Detecting nodes in the cluster
I0919 18:59:13.520] Changing logfiles to be world-readable for download
I0919 18:59:13.596] Changing logfiles to be world-readable for download
I0919 18:59:14.682] Changing logfiles to be world-readable for download
I0919 18:59:14.767] Changing logfiles to be world-readable for download
... skipping 22 lines ...
W0919 18:59:20.185] scp: /var/log/node-problem-detector.log*: No such file or directory
W0919 18:59:20.185] scp: /var/log/kubelet.cov*: No such file or directory
W0919 18:59:20.185] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0919 18:59:20.186] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0919 18:59:20.186] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0919 18:59:20.186] scp: /var/log/startupscript.log*: No such file or directory
W0919 18:59:20.190] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0919 18:59:20.334] 
W0919 18:59:20.335] Specify --start=44215 in the next get-serial-port-output invocation to get only the new output starting from here.
W0919 18:59:20.385] scp: /var/log/kube-proxy.log*: No such file or directory
W0919 18:59:20.386] scp: /var/log/fluentd.log*: No such file or directory
W0919 18:59:20.388] scp: /var/log/node-problem-detector.log*: No such file or directory
W0919 18:59:20.388] scp: /var/log/kubelet.cov*: No such file or directory
W0919 18:59:20.388] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0919 18:59:20.388] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0919 18:59:20.388] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0919 18:59:20.388] scp: /var/log/startupscript.log*: No such file or directory
W0919 18:59:20.392] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0919 18:59:20.416] 
W0919 18:59:20.416] Specify --start=44375 in the next get-serial-port-output invocation to get only the new output starting from here.
W0919 18:59:21.276] scp: /var/log/kube-proxy.log*: No such file or directory
W0919 18:59:21.277] scp: /var/log/fluentd.log*: No such file or directory
W0919 18:59:21.277] scp: /var/log/node-problem-detector.log*: No such file or directory
W0919 18:59:21.278] scp: /var/log/kubelet.cov*: No such file or directory
W0919 18:59:21.278] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0919 18:59:21.279] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0919 18:59:21.279] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0919 18:59:21.280] scp: /var/log/startupscript.log*: No such file or directory
W0919 18:59:21.282] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0919 18:59:21.311] scp: /var/log/kube-proxy.log*: No such file or directory
W0919 18:59:21.312] scp: /var/log/fluentd.log*: No such file or directory
W0919 18:59:21.312] scp: /var/log/node-problem-detector.log*: No such file or directory
W0919 18:59:21.313] scp: /var/log/kubelet.cov*: No such file or directory
W0919 18:59:21.313] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0919 18:59:21.313] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0919 18:59:21.314] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0919 18:59:21.314] scp: /var/log/startupscript.log*: No such file or directory
W0919 18:59:21.317] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0919 18:59:21.802] scp: /var/log/kube-proxy.log*: No such file or directory
W0919 18:59:21.803] scp: /var/log/fluentd.log*: No such file or directory
W0919 18:59:21.804] scp: /var/log/node-problem-detector.log*: No such file or directory
W0919 18:59:21.804] scp: /var/log/kubelet.cov*: No such file or directory
W0919 18:59:21.804] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0919 18:59:21.805] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0919 18:59:21.805] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0919 18:59:21.805] scp: /var/log/startupscript.log*: No such file or directory
W0919 18:59:21.811] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0919 18:59:21.956] scp: /var/log/kube-proxy.log*: No such file or directory
W0919 18:59:21.956] scp: /var/log/fluentd.log*: No such file or directory
W0919 18:59:21.957] scp: /var/log/node-problem-detector.log*: No such file or directory
W0919 18:59:21.957] scp: /var/log/kubelet.cov*: No such file or directory
W0919 18:59:21.958] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0919 18:59:21.958] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0919 18:59:21.958] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0919 18:59:21.959] scp: /var/log/startupscript.log*: No such file or directory
W0919 18:59:21.965] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0919 18:59:21.993] scp: /var/log/kube-proxy.log*: No such file or directory
W0919 18:59:21.993] scp: /var/log/fluentd.log*: No such file or directory
W0919 18:59:21.993] scp: /var/log/node-problem-detector.log*: No such file or directory
W0919 18:59:21.994] scp: /var/log/kubelet.cov*: No such file or directory
W0919 18:59:21.994] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0919 18:59:21.994] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0919 18:59:21.994] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0919 18:59:21.994] scp: /var/log/startupscript.log*: No such file or directory
W0919 18:59:22.002] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0919 18:59:26.918] INSTANCE_GROUPS=e2e-82029-ac87c-minion-group
W0919 18:59:26.918] NODE_NAMES=e2e-82029-ac87c-minion-group-2k0c e2e-82029-ac87c-minion-group-41ff e2e-82029-ac87c-minion-group-8jrd e2e-82029-ac87c-minion-group-9rv1 e2e-82029-ac87c-minion-group-nqk0 e2e-82029-ac87c-minion-group-xd9v e2e-82029-ac87c-minion-group-xgcn
I0919 18:59:27.953] Failures for e2e-82029-ac87c-minion-group (if any):
W0919 18:59:29.792] 2019/09/19 18:59:29 process.go:155: Step './cluster/log-dump/log-dump.sh /workspace/_artifacts' finished in 1m43.865354822s
W0919 18:59:29.792] 2019/09/19 18:59:29 process.go:153: Running: ./hack/e2e-internal/e2e-down.sh
W0919 18:59:29.846] Project: k8s-presubmit-scale
... skipping 12 lines ...
W0919 18:59:35.735] NODE_NAMES=e2e-82029-ac87c-minion-group-2k0c e2e-82029-ac87c-minion-group-41ff e2e-82029-ac87c-minion-group-8jrd e2e-82029-ac87c-minion-group-9rv1 e2e-82029-ac87c-minion-group-nqk0 e2e-82029-ac87c-minion-group-xd9v e2e-82029-ac87c-minion-group-xgcn
W0919 18:59:39.940] Deleting Managed Instance Group...
W0919 19:01:28.562] ........................Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instanceGroupManagers/e2e-82029-ac87c-minion-group].
W0919 19:01:28.563] done.
W0919 19:01:37.676] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/instanceTemplates/e2e-82029-ac87c-minion-template].
I0919 19:01:56.939] Removing etcd replica, name: e2e-82029-ac87c-master, port: 2379, result: 52
I0919 19:01:58.619] {"message":"Internal Server Error"}Removing etcd replica, name: e2e-82029-ac87c-master, port: 4002, result: 0
W0919 19:02:05.131] Updated [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-82029-ac87c-master].
W0919 19:04:40.240] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-82029-ac87c-master].
W0919 19:05:06.019] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-82029-ac87c-master-https].
W0919 19:05:07.276] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-82029-ac87c-master-etcd].
W0919 19:05:07.874] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-82029-ac87c-minion-all].
W0919 19:05:17.126] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/regions/us-east1/addresses/e2e-82029-ac87c-master-ip].
... skipping 15 lines ...
W0919 19:07:02.405] W0919 19:07:02.405406   88012 loader.go:223] Config not found: /workspace/.kube/config
I0919 19:07:02.506] Property "users.k8s-presubmit-scale_e2e-82029-ac87c-basic-auth" unset.
W0919 19:07:02.619] W0919 19:07:02.619212   88058 loader.go:223] Config not found: /workspace/.kube/config
W0919 19:07:02.620] W0919 19:07:02.620053   88058 loader.go:223] Config not found: /workspace/.kube/config
W0919 19:07:02.630] 2019/09/19 19:07:02 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 7m32.838169522s
W0919 19:07:02.636] 2019/09/19 19:07:02 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0919 19:07:02.637] 2019/09/19 19:07:02 main.go:319: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 2
W0919 19:07:02.637] Traceback (most recent call last):
W0919 19:07:02.637]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W0919 19:07:02.652]     main(parse_args())
W0919 19:07:02.653]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W0919 19:07:02.653]     mode.start(runner_args)
W0919 19:07:02.653]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
... skipping 3 lines ...
W0919 19:07:02.655]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0919 19:07:02.655]     raise CalledProcessError(retcode, cmd)
W0919 19:07:02.657] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--build=bazel', '--stage=gs://kubernetes-release-pull/ci/pull-kubernetes-kubemark-e2e-gce-big', '--up', '--down', '--provider=gce', '--cluster=e2e-82029-ac87c', '--gcp-network=e2e-82029-ac87c', '--extract=local', '--gcp-master-size=n1-standard-4', '--gcp-node-size=n1-standard-8', '--gcp-nodes=7', '--gcp-project=k8s-presubmit-scale', '--gcp-zone=us-east1-b', '--kubemark', '--kubemark-nodes=500', '--test_args=--ginkgo.focus=xxxx', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--nodes=500', '--test-cmd-args=--provider=kubemark', '--test-cmd-args=--report-dir=/workspace/_artifacts', '--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testoverrides=./testing/load/kubemark/500_nodes/override.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/use_simple_latency_query.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=100m')' returned non-zero exit status 1
I0919 19:07:02.664] Property "contexts.k8s-presubmit-scale_e2e-82029-ac87c" unset.
I0919 19:07:02.665] Cleared config for k8s-presubmit-scale_e2e-82029-ac87c from /workspace/.kube/config
I0919 19:07:02.665] Done
E0919 19:07:02.665] Command failed
I0919 19:07:02.666] process 703 exited with code 1 after 37.7m
E0919 19:07:02.666] FAIL: pull-kubernetes-kubemark-e2e-gce-big
I0919 19:07:02.667] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0919 19:07:03.485] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0919 19:07:03.572] process 88068 exited with code 0 after 0.0m
I0919 19:07:03.573] Call:  gcloud config get-value account
I0919 19:07:04.125] process 88080 exited with code 0 after 0.0m
I0919 19:07:04.125] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0919 19:07:04.126] Upload result and artifacts...
I0919 19:07:04.126] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/82029/pull-kubernetes-kubemark-e2e-gce-big/1174751839514529792
I0919 19:07:04.127] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/82029/pull-kubernetes-kubemark-e2e-gce-big/1174751839514529792/artifacts
W0919 19:07:05.684] CommandException: One or more URLs matched no objects.
E0919 19:07:05.827] Command failed
I0919 19:07:05.827] process 88092 exited with code 1 after 0.0m
W0919 19:07:05.828] Remote dir gs://kubernetes-jenkins/pr-logs/pull/82029/pull-kubernetes-kubemark-e2e-gce-big/1174751839514529792/artifacts not exist yet
I0919 19:07:05.828] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/82029/pull-kubernetes-kubemark-e2e-gce-big/1174751839514529792/artifacts
I0919 19:07:09.168] process 88234 exited with code 0 after 0.1m
I0919 19:07:09.169] Call:  git rev-parse HEAD
I0919 19:07:09.174] process 88913 exited with code 0 after 0.0m
... skipping 21 lines ...