PR: joakimr-axis: Fix shellcheck warnings/errors in /build/lib/release.sh
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-01-17 20:56
Elapsed: 48m7s
Revision
Builder: gke-prow-default-pool-cf4891d4-ppqk
Refs: master:f88f58cb, 87285:41ba2478
pod: bc04ea8e-396b-11ea-9c9f-1eaf3426b6e4
infra-commit: 6384054e3
repo: k8s.io/kubernetes
repo-commit: 176e6b2d8230e87585d68dccd1a216dc3d488b63
repos: {u'k8s.io/kubernetes': u'master:f88f58cb7938ee54216b390b07dd0745ee59fcc1,87285:41ba24782dc0c3263d84be8319604fb7937c0a66'}

No Test Failures!


Error lines from build-log.txt

... skipping 194 lines ...
I0117 21:44:27.341] +++ [0117 21:44:27] Waiting on tarballs
W0117 21:44:27.482] tar: ./metadata-proxy/gce/metadata-proxy.yaml\n./metadata-proxy/gce/podsecuritypolicies/metadata-proxy-psp-binding.yaml\n./rbac/legacy-kubelet-user-disable/kubelet-binding.yaml\n./rbac/kubelet-cert-rotation/kubelet-certificate-management.yaml\n./rbac/cluster-loadbalancing/glbc/user-rolebindings.yaml\n./rbac/cluster-loadbalancing/glbc/roles.yaml\n./rbac/legacy-kubelet-user/kubelet-binding.yaml\n./rbac/kubelet-api-auth/kubelet-api-admin-role.yaml\n./rbac/kubelet-api-auth/kube-apiserver-kubelet-api-admin-binding.yaml\n./rbac/cluster-autoscaler/cluster-autoscaler-rbac.yaml\n./storage-class/vsphere/default.yaml\n./storage-class/gce/default.yaml\n./storage-class/local/default.yaml\n./storage-class/aws/default.yaml\n./storage-class/azure/default.yaml\n./storage-class/openstack/default.yaml\n./metrics-server/auth-reader.yaml\n./metrics-server/metrics-server-service.yaml\n./metrics-server/auth-delegator.yaml\n./metrics-server/resource-reader.yaml\n./metrics-server/metrics-apiservice.yaml\n./metrics-server/metrics-server-deployment.yaml\n./dashboard/dashboard-secret.yaml\n./dashboard/dashboard-service.yaml\n./dashboard/dashboard-rbac.yaml\n./dashboard/dashboard-configmap.yaml\n./dashboard/dashboard-deployment.yaml\n./volumesnapshots/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml\n./volumesnapshots/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml\n./volumesnapshots/crd/snapshot.storage.k8s.io_volumesnapshots.yaml\n./volumesnapshots/volume-snapshot-controller/rbac-volume-snapshot-controller.yaml\n./volumesnapshots/volume-snapshot-controller/volume-snapshot-controller-deployment.yaml\n./cluster-loadbalancing/glbc/default-svc-controller.yaml\n./cluster-loadbalancing/glbc/default-svc.yaml\n./kube-proxy/kube-proxy-rbac.yaml\n./kube-proxy/kube-proxy-ds.yaml\n./metadata-agent/stackdriver/metadata-agent-rbac.yaml\n./metadata-agent/stackdriver/podsecuritypolicies/metadata-agent-psp-binding.yaml\n./metadata-agent/stackdriver/metadata-agent.yaml\n./dns/kube-dns/kube-dns.yaml.in\n./dns/coredns/coredns.yaml.in\n./dns/nodelocaldns/nodelocaldns.yaml\n./ip-masq-agent/ip-masq-agent.yaml\n./ip-masq-agent/podsecuritypolicies/ip-masq-agent-psp-binding.yaml\n./device-plugins/nvidia-gpu/daemonset.yaml\n./node-problem-detector/standalone/npd-binding.yaml\n./node-problem-detector/npd.yaml\n./node-problem-detector/podsecuritypolicies/npd-psp-binding.yaml\n./node-problem-detector/kubelet-user-standalone/npd-binding.yaml\n./calico-policy-controller/ipamhandle-crd.yaml\n./calico-policy-controller/calico-cpva-clusterrolebinding.yaml\n./calico-policy-controller/felixconfigurations-crd.yaml\n./calico-policy-controller/typha-horizontal-autoscaler-configmap.yaml\n./calico-policy-controller/typha-deployment.yaml\n./calico-policy-controller/typha-vertical-autoscaler-clusterrolebinding.yaml\n./calico-policy-controller/calico-serviceaccount.yaml\n./calico-policy-controller/globalbgpconfig-crd.yaml\n./calico-policy-controller/calico-clusterrole.yaml\n./calico-policy-controller/typha-horizontal-autoscaler-rolebinding.yaml\n./calico-policy-controller/networkset-crd.yaml\n./calico-policy-controller/calico-node-vertical-autoscaler-deployment.yaml\n./calico-policy-controller/typha-horizontal-autoscaler-deployment.yaml\n./calico-policy-controller/blockaffinity-crd.yaml\n./calico-policy-controller/calico-cpva-serviceaccount.yaml\n./calico-policy-controller/ippool-crd.yaml\n./calico-policy-controller/calico-node-vertical-autoscaler-configmap.yaml\n./calico-policy-controller/bgppeers-crd.yaml\n./calico-policy-controller/typha-vertical-autoscaler-deployment.yaml\n./calico-policy-controller/typha-vertical-autoscaler-clusterrole.yaml\n./calico-policy-controller/globalnetworksets-crd.yaml\n./calico-policy-controller/globalnetworkpolicy-crd.yaml\n./calico-policy-controller/typha-vertical-autoscaler-serviceaccount.yaml\n./calico-policy-controller/calico-node-daemonset.yaml\n./calico-policy-controller/globalfelixconfig-crd.yaml\n./calico-policy-controller/networkpolicies-crd.yaml\n./calico-policy-controller/typha-horizontal-autoscaler-clusterrole.yaml\n./calico-policy-controller/calico-clusterrolebinding.yaml\n./calico-policy-controller/typha-vertical-autoscaler-configmap.yaml\n./calico-policy-controller/typha-horizontal-autoscaler-role.yaml\n./calico-policy-controller/clusterinformations-crd.yaml\n./calico-policy-controller/podsecuritypolicies/calico-node-psp-binding.yaml\n./calico-policy-controller/ipamblock-crd.yaml\n./calico-policy-controller/bgpconfigurations-crd.yaml\n./calico-policy-controller/hostendpoints-crd.yaml\n./calico-policy-controller/calico-cpva-clusterrole.yaml\n./calico-policy-controller/typha-horizontal-autoscaler-serviceaccount.yaml\n./calico-policy-controller/ipamconfig-crd.yaml\n./calico-policy-controller/typha-service.yaml\n./calico-policy-controller/typha-horizontal-autoscaler-clusterrolebinding.yaml\n./fluentd-gcp/fluentd-gcp-ds.yaml\n./fluentd-gcp/scaler-policy.yaml\n./fluentd-gcp/scaler-rbac.yaml\n./fluentd-gcp/fluentd-gcp-configmap-old.yaml\n./fluentd-gcp/fluentd-gcp-configmap.yaml\n./fluentd-gcp/fluentd-gcp-ds-sa.yaml\n./fluentd-gcp/podsecuritypolicies/event-exporter-psp.yaml\n./fluentd-gcp/podsecuritypolicies/event-exporter-psp-binding.yaml\n./fluentd-gcp/podsecuritypolicies/fluentd-gcp-psp-binding.yaml\n./fluentd-gcp/podsecuritypolicies/fluentd-gcp-psp.yaml\n./fluentd-gcp/podsecuritypolicies/event-exporter-psp-role.yaml\n./fluentd-gcp/podsecuritypolicies/fluentd-gcp-psp-role.yaml\n./fluentd-gcp/event-exporter.yaml\n./fluentd-gcp/scaler-deployment.yaml\n./dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml\n./fluentd-elasticsearch/kibana-deployment.yaml\n./fluentd-elasticsearch/fluentd-es-configmap.yaml\n./fluentd-elasticsearch/kibana-service.yaml\n./fluentd-elasticsearch/es-service.yaml\n./fluentd-elasticsearch/fluentd-es-ds.yaml\n./fluentd-elasticsearch/es-statefulset.yaml\n./fluentd-elasticsearch/podsecuritypolicies/es-psp-binding.yaml: Cannot stat: File name too long
W0117 21:44:27.483] tar: Exiting with failure status due to previous errors
W0117 21:44:27.507] !!! [0117 21:44:27] Call tree:
W0117 21:44:27.511] !!! [0117 21:44:27]  1: /workspace/k8s.io/kubernetes/build/lib/release.sh:94 kube::release::package_kube_manifests_tarball(...)
W0117 21:44:27.518] !!! [0117 21:44:27]  2: build/release.sh:45 kube::release::package_tarballs(...)
W0117 21:44:37.492] !!! [0117 21:44:37] previous tarball phase failed
W0117 21:44:37.501] make: *** [Makefile:405: release] Error 1
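The `Cannot stat: File name too long` error above is the classic symptom of over-quoting a command substitution while fixing shellcheck warnings: tar receives the entire newline-joined file list as a single "file name" instead of one argument per file. A minimal sketch of that failure mode and the usual array-based fix, using hypothetical file names (this is not the actual release.sh code):

```shell
#!/usr/bin/env bash
# Hypothetical reproduction: quoting a newline-joined list makes tar
# see it as ONE file name, matching the "File name too long" failure.
dir="$(mktemp -d)"
touch "${dir}/a.yaml" "${dir}/b.yaml"
files="$(cd "${dir}" && find . -name '*.yaml')"

# Broken: the quoted expansion is a single argument with an embedded newline.
quoted_failed=no
tar -C "${dir}" -cf /dev/null "${files}" 2>/dev/null || quoted_failed=yes

# Fixed: split the list into an array, then expand element-wise.
# This silences shellcheck's SC2086 without gluing the names together.
array_ok=no
mapfile -t file_arr <<< "${files}"
tar -C "${dir}" -cf /dev/null "${file_arr[@]}" 2>/dev/null && array_ok=yes

echo "quoted_failed=${quoted_failed} array_ok=${array_ok}"
rm -rf "${dir}"
```

Running this prints `quoted_failed=yes array_ok=yes`: the quoted form fails exactly the way the build did, while the array expansion passes each file to tar as its own argument.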
W0117 21:44:37.503] Traceback (most recent call last):
W0117 21:44:37.503]   File "/workspace/./test-infra/jenkins/../scenarios/execute.py", line 50, in <module>
W0117 21:44:37.503]     main(ARGS.env, ARGS.cmd + ARGS.args)
W0117 21:44:37.503]   File "/workspace/./test-infra/jenkins/../scenarios/execute.py", line 41, in main
W0117 21:44:37.503]     check(*cmd)
W0117 21:44:37.503]   File "/workspace/./test-infra/jenkins/../scenarios/execute.py", line 30, in check
W0117 21:44:37.504]     subprocess.check_call(cmd)
W0117 21:44:37.504]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W0117 21:44:37.504]     raise CalledProcessError(retcode, cmd)
W0117 21:44:37.504] subprocess.CalledProcessError: Command '('make', 'release')' returned non-zero exit status 2
E0117 21:44:37.512] Command failed
I0117 21:44:37.512] process 725 exited with code 1 after 46.7m
E0117 21:44:37.513] FAIL: pull-kubernetes-cross
I0117 21:44:37.513] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0117 21:44:38.393] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0117 21:44:38.445] process 296623 exited with code 0 after 0.0m
I0117 21:44:38.445] Call:  gcloud config get-value account
I0117 21:44:38.792] process 296636 exited with code 0 after 0.0m
I0117 21:44:38.793] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
... skipping 28 lines ...