PR xing-yang: Move CSIDriver Lister to the controller
Result FAILURE
Tests 1 failed / 7 succeeded
Started 2019-03-16 01:01
Elapsed 39m32s
Revision
Builder gke-prow-containerd-pool-99179761-8nmq
Refs master:df2094b3, 75129:dd198211
pod eddc7d5d-4786-11e9-b821-0a580a6c1069
infra-commit 1e02ffffe
job-version v1.15.0-alpha.0.1230+4852aaeff3ea63
repo k8s.io/kubernetes
repo-commit 4852aaeff3ea635b71f7b8a1c305b19cd01b48bf
repos {u'k8s.io/kubernetes': u'master:df2094b3d728bd58c5f23b41add109e32fc7c301,75129:dd1982114fcf51902aaedfda06d61d873d98d208', u'k8s.io/perf-tests': u'master', u'k8s.io/release': u'master'}
revision v1.15.0-alpha.0.1230+4852aaeff3ea63

Test Failures


Up 8m7s

error during ./hack/e2e-internal/e2e-up.sh: exit status 2
				from junit_runner.xml
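
The failing step is cluster bring-up: as the log excerpt below shows, kube-up waited up to 300 seconds for the Kubernetes API to become reachable, every probe of the master's external IP (35.231.0.96) was refused on port 443, and ./hack/e2e-internal/e2e-up.sh exited with status 2. A minimal stand-alone version of that reachability probe, assuming only the master IP reported in this run (the real wait loop also supplies TLS credentials and retries, so this is an illustrative sketch, not the script's exact command):

  # Hypothetical probe; the IP is taken from this run's log, the endpoint and timeout are assumptions.
  curl -k --max-time 5 https://35.231.0.96:443/ || echo "API server not reachable yet"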




Error lines from build-log.txt

... skipping 1185 lines ...
W0316 01:28:05.883] NODE_NAMES=e2e-75129-ac87c-minion-group-2h9d e2e-75129-ac87c-minion-group-51l4 e2e-75129-ac87c-minion-group-fnk8 e2e-75129-ac87c-minion-group-kfb1 e2e-75129-ac87c-minion-group-kjtb e2e-75129-ac87c-minion-group-mlvl e2e-75129-ac87c-minion-group-xj7x
W0316 01:28:05.883] Trying to find master named 'e2e-75129-ac87c-master'
W0316 01:28:05.883] Looking for address 'e2e-75129-ac87c-master-ip'
I0316 01:28:06.718] Waiting up to 300 seconds for cluster initialization.
I0316 01:28:06.719] 
I0316 01:28:06.719]   This will continually check to see if the API for kubernetes is reachable.
I0316 01:28:06.719]   This may time out if there was some uncaught error during start up.
I0316 01:28:06.719] 
I0316 01:33:08.755] ...................................................................................................................................................Checking for custom logdump instances, if any
I0316 01:33:08.760] Sourcing kube-util.sh
I0316 01:33:08.824] Detecting project
I0316 01:33:08.825] Project: k8s-presubmit-scale
I0316 01:33:08.825] Network Project: k8s-presubmit-scale
I0316 01:33:08.825] Zone: us-east1-b
I0316 01:33:08.825] Dumping logs from master locally to '/workspace/_artifacts'
W0316 01:33:08.926] Using master: e2e-75129-ac87c-master (external IP: 35.231.0.96)
W0316 01:33:08.926] Cluster failed to initialize within 300 seconds.
W0316 01:33:08.927] Last output from querying API server follows:
W0316 01:33:08.927] -----------------------------------------------------
W0316 01:33:08.927]   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
W0316 01:33:08.927]                                  Dload  Upload   Total   Spent    Left  Speed
W0316 01:33:08.927] 
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (7) Failed to connect to 35.231.0.96 port 443: Connection refused
W0316 01:33:08.927] -----------------------------------------------------
W0316 01:33:08.927] 2019/03/16 01:33:08 process.go:155: Step './hack/e2e-internal/e2e-up.sh' finished in 8m7.0343314s
W0316 01:33:08.928] 2019/03/16 01:33:08 e2e.go:522: Dumping logs locally to: /workspace/_artifacts
W0316 01:33:08.928] 2019/03/16 01:33:08 process.go:153: Running: ./cluster/log-dump/log-dump.sh /workspace/_artifacts
W0316 01:33:08.928] Trying to find master named 'e2e-75129-ac87c-master'
W0316 01:33:08.928] Looking for address 'e2e-75129-ac87c-master-ip'
... skipping 4 lines ...
W0316 01:33:44.886] Specify --start=42813 in the next get-serial-port-output invocation to get only the new output starting from here.
W0316 01:33:47.138] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0316 01:33:47.139] scp: /var/log/kube-addon-manager.log*: No such file or directory
W0316 01:33:47.139] scp: /var/log/fluentd.log*: No such file or directory
W0316 01:33:47.139] scp: /var/log/kubelet.cov*: No such file or directory
W0316 01:33:47.139] scp: /var/log/startupscript.log*: No such file or directory
W0316 01:33:47.221] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0316 01:33:47.322] Dumping logs from nodes locally to '/workspace/_artifacts'
I0316 01:33:47.322] Detecting nodes in the cluster
I0316 01:34:24.998] Changing logfiles to be world-readable for download
I0316 01:34:25.501] Changing logfiles to be world-readable for download
I0316 01:34:26.226] Changing logfiles to be world-readable for download
I0316 01:34:26.239] Changing logfiles to be world-readable for download
... skipping 25 lines ...
W0316 01:34:31.424] scp: /var/log/node-problem-detector.log*: No such file or directory
W0316 01:34:31.424] scp: /var/log/kubelet.cov*: No such file or directory
W0316 01:34:31.424] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0316 01:34:31.424] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0316 01:34:31.425] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0316 01:34:31.425] scp: /var/log/startupscript.log*: No such file or directory
W0316 01:34:31.427] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0316 01:34:31.534] scp: /var/log/fluentd.log*: No such file or directory
W0316 01:34:31.534] scp: /var/log/node-problem-detector.log*: No such file or directory
W0316 01:34:31.535] scp: /var/log/kubelet.cov*: No such file or directory
W0316 01:34:31.535] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0316 01:34:31.535] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0316 01:34:31.535] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0316 01:34:31.535] scp: /var/log/startupscript.log*: No such file or directory
W0316 01:34:31.539] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0316 01:34:32.249] scp: /var/log/fluentd.log*: No such file or directory
W0316 01:34:32.250] scp: /var/log/node-problem-detector.log*: No such file or directory
W0316 01:34:32.250] scp: /var/log/kubelet.cov*: No such file or directory
W0316 01:34:32.250] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0316 01:34:32.250] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0316 01:34:32.251] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0316 01:34:32.251] scp: /var/log/startupscript.log*: No such file or directory
W0316 01:34:32.254] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0316 01:34:32.331] scp: /var/log/fluentd.log*: No such file or directory
W0316 01:34:32.331] scp: /var/log/node-problem-detector.log*: No such file or directory
W0316 01:34:32.331] scp: /var/log/kubelet.cov*: No such file or directory
W0316 01:34:32.332] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0316 01:34:32.332] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0316 01:34:32.332] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0316 01:34:32.332] scp: /var/log/startupscript.log*: No such file or directory
W0316 01:34:32.335] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0316 01:34:32.423] scp: /var/log/fluentd.log*: No such file or directory
W0316 01:34:32.424] scp: /var/log/node-problem-detector.log*: No such file or directory
W0316 01:34:32.424] scp: /var/log/kubelet.cov*: No such file or directory
W0316 01:34:32.424] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0316 01:34:32.424] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0316 01:34:32.424] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0316 01:34:32.424] scp: /var/log/startupscript.log*: No such file or directory
W0316 01:34:32.428] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0316 01:34:32.621] scp: /var/log/fluentd.log*: No such file or directory
W0316 01:34:32.621] scp: /var/log/node-problem-detector.log*: No such file or directory
W0316 01:34:32.621] scp: /var/log/kubelet.cov*: No such file or directory
W0316 01:34:32.621] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0316 01:34:32.622] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0316 01:34:32.622] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0316 01:34:32.622] scp: /var/log/startupscript.log*: No such file or directory
W0316 01:34:32.625] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0316 01:34:32.660] scp: /var/log/fluentd.log*: No such file or directory
W0316 01:34:32.660] scp: /var/log/node-problem-detector.log*: No such file or directory
W0316 01:34:32.660] scp: /var/log/kubelet.cov*: No such file or directory
W0316 01:34:32.660] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0316 01:34:32.660] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0316 01:34:32.661] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0316 01:34:32.661] scp: /var/log/startupscript.log*: No such file or directory
W0316 01:34:32.664] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0316 01:34:35.968] INSTANCE_GROUPS=e2e-75129-ac87c-minion-group
W0316 01:34:35.968] NODE_NAMES=e2e-75129-ac87c-minion-group-2h9d e2e-75129-ac87c-minion-group-51l4 e2e-75129-ac87c-minion-group-fnk8 e2e-75129-ac87c-minion-group-kfb1 e2e-75129-ac87c-minion-group-kjtb e2e-75129-ac87c-minion-group-mlvl e2e-75129-ac87c-minion-group-xj7x
I0316 01:34:36.859] Failures for e2e-75129-ac87c-minion-group
W0316 01:34:38.216] 2019/03/16 01:34:38 process.go:155: Step './cluster/log-dump/log-dump.sh /workspace/_artifacts' finished in 1m29.465100833s
W0316 01:34:38.217] 2019/03/16 01:34:38 process.go:153: Running: ./hack/e2e-internal/e2e-down.sh
W0316 01:34:38.274] Project: k8s-presubmit-scale
... skipping 35 lines ...
I0316 01:40:59.687] Property "users.k8s-presubmit-scale_e2e-75129-ac87c-basic-auth" unset.
I0316 01:40:59.841] Property "contexts.k8s-presubmit-scale_e2e-75129-ac87c" unset.
I0316 01:40:59.846] Cleared config for k8s-presubmit-scale_e2e-75129-ac87c from /workspace/.kube/config
I0316 01:40:59.847] Done
W0316 01:40:59.947] 2019/03/16 01:40:59 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 6m21.632631929s
W0316 01:40:59.948] 2019/03/16 01:40:59 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0316 01:40:59.948] 2019/03/16 01:40:59 main.go:307: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 2
W0316 01:40:59.948] Traceback (most recent call last):
W0316 01:40:59.948]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 764, in <module>
W0316 01:40:59.976]     main(parse_args())
W0316 01:40:59.976]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 615, in main
W0316 01:40:59.977]     mode.start(runner_args)
W0316 01:40:59.977]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0316 01:40:59.977]     check_env(env, self.command, *args)
W0316 01:40:59.977]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0316 01:40:59.977]     subprocess.check_call(cmd, env=env)
W0316 01:40:59.977]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0316 01:41:00.040]     raise CalledProcessError(retcode, cmd)
W0316 01:41:00.041] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--build=bazel', '--stage=gs://kubernetes-release-pull/ci/pull-kubernetes-kubemark-e2e-gce-big', '--up', '--down', '--provider=gce', '--cluster=e2e-75129-ac87c', '--gcp-network=e2e-75129-ac87c', '--extract=local', '--gcp-master-size=n1-standard-4', '--gcp-node-size=n1-standard-8', '--gcp-nodes=7', '--gcp-project=k8s-presubmit-scale', '--gcp-zone=us-east1-b', '--kubemark', '--kubemark-nodes=500', '--test_args=--ginkgo.focus=xxxx', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--nodes=500', '--test-cmd-args=--provider=kubemark', '--test-cmd-args=--report-dir=/workspace/_artifacts', '--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testoverrides=./testing/load/kubemark/500_nodes/override.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=100m')' returned non-zero exit status 1
E0316 01:41:00.054] Command failed
I0316 01:41:00.054] process 733 exited with code 1 after 38.2m
E0316 01:41:00.054] FAIL: pull-kubernetes-kubemark-e2e-gce-big
I0316 01:41:00.055] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0316 01:41:00.784] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0316 01:41:00.852] process 82591 exited with code 0 after 0.0m
I0316 01:41:00.853] Call:  gcloud config get-value account
I0316 01:41:01.295] process 82603 exited with code 0 after 0.0m
I0316 01:41:01.295] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0316 01:41:01.295] Upload result and artifacts...
I0316 01:41:01.295] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/75129/pull-kubernetes-kubemark-e2e-gce-big/41516
I0316 01:41:01.296] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/75129/pull-kubernetes-kubemark-e2e-gce-big/41516/artifacts
W0316 01:41:02.509] CommandException: One or more URLs matched no objects.
E0316 01:41:02.656] Command failed
I0316 01:41:02.656] process 82615 exited with code 1 after 0.0m
W0316 01:41:02.656] Remote dir gs://kubernetes-jenkins/pr-logs/pull/75129/pull-kubernetes-kubemark-e2e-gce-big/41516/artifacts not exist yet
I0316 01:41:02.657] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/75129/pull-kubernetes-kubemark-e2e-gce-big/41516/artifacts
I0316 01:41:05.798] process 82757 exited with code 0 after 0.1m
I0316 01:41:05.799] Call:  git rev-parse HEAD
I0316 01:41:05.804] process 83442 exited with code 0 after 0.0m
... skipping 21 lines ...