PR bart0sh: Pass pod annotations to Allocate
Result: FAILURE
Tests: 1 failed / 19 succeeded
Started: 2019-03-15 20:11
Elapsed: 48m25s
Revision
Builder: gke-prow-containerd-pool-99179761-39jm
Refs: master:df2094b3, 61775:5d1012de
pod: 62c12f95-475e-11e9-bd0d-0a580a6c132b
infra-commit: 5def04463
job-version: v1.15.0-alpha.0.1230+bfc470e4e5ae5b
pod: 62c12f95-475e-11e9-bd0d-0a580a6c132b
repo: k8s.io/kubernetes
repo-commit: bfc470e4e5ae5bac94ba19cf8c33e5f6b7938f34
repos: {u'k8s.io/kubernetes': u'master:df2094b3d728bd58c5f23b41add109e32fc7c301,61775:5d1012de6ab6e540578a9bd743b6d571682ea883', u'k8s.io/perf-tests': u'master', u'k8s.io/release': u'master'}
revision: v1.15.0-alpha.0.1230+bfc470e4e5ae5b

Test Failures

ClusterLoaderV2 (59s)

error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --nodes=500 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/load/kubemark/500_nodes/override.yaml: exit status 255
(from junit_runner.xml)
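The exit status 255 traces back to the error visible further down in the build log: ClusterLoaderV2 aborted while setting up the Prometheus stack with "unmarshaling error: yaml: line 14: could not find expected ':'" when applying the kubemark kube-apiserver-endpoints.yaml manifest. A minimal sketch for checking such a manifest locally before rerunning the job is shown below; the file name is a hypothetical local copy, and gopkg.in/yaml.v2 is assumed only because the error text matches the go-yaml format, not because the source confirms which YAML library clusterloader2 uses.

    // check_manifest.go -- minimal local sketch, not part of this job's tooling.
    // Parses a manifest so the offending line ("yaml: line N: ...") can be
    // located before rerunning ClusterLoaderV2.
    package main

    import (
        "fmt"
        "os"

        yaml "gopkg.in/yaml.v2"
    )

    func main() {
        // Hypothetical local copy of the manifest named in the failure.
        data, err := os.ReadFile("kube-apiserver-endpoints.yaml")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        var doc map[string]interface{}
        if err := yaml.Unmarshal(data, &doc); err != nil {
            // go-yaml reports the malformed line number in the error text.
            fmt.Fprintln(os.Stderr, "unmarshaling error:", err)
            os.Exit(1)
        }
        fmt.Println("manifest parses cleanly")
    }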




Error lines from build-log.txt

... skipping 1170 lines ...
W0315 20:41:56.281] Trying to find master named 'e2e-61775-ac87c-master'
W0315 20:41:56.282] Looking for address 'e2e-61775-ac87c-master-ip'
W0315 20:41:57.103] Using master: e2e-61775-ac87c-master (external IP: 35.227.22.75)
I0315 20:41:57.204] Waiting up to 300 seconds for cluster initialization.
I0315 20:41:57.204] 
I0315 20:41:57.204]   This will continually check to see if the API for kubernetes is reachable.
I0315 20:41:57.205]   This may time out if there was some uncaught error during start up.
I0315 20:41:57.205] 
I0315 20:42:14.954] ....Kubernetes cluster created.
I0315 20:42:15.089] Cluster "k8s-presubmit-scale_e2e-61775-ac87c" set.
I0315 20:42:15.221] User "k8s-presubmit-scale_e2e-61775-ac87c" set.
I0315 20:42:15.362] Context "k8s-presubmit-scale_e2e-61775-ac87c" created.
I0315 20:42:15.504] Switched to context "k8s-presubmit-scale_e2e-61775-ac87c".
... skipping 22 lines ...
I0315 20:42:54.351] e2e-61775-ac87c-minion-group-3wgs   Ready                      <none>   12s   v1.15.0-alpha.0.1230+bfc470e4e5ae5b
I0315 20:42:54.351] e2e-61775-ac87c-minion-group-c8bx   Ready                      <none>   12s   v1.15.0-alpha.0.1230+bfc470e4e5ae5b
I0315 20:42:54.352] e2e-61775-ac87c-minion-group-jgdb   Ready                      <none>   10s   v1.15.0-alpha.0.1230+bfc470e4e5ae5b
I0315 20:42:54.352] e2e-61775-ac87c-minion-group-jqzx   Ready                      <none>   18s   v1.15.0-alpha.0.1230+bfc470e4e5ae5b
I0315 20:42:54.352] e2e-61775-ac87c-minion-group-l9td   Ready                      <none>   11s   v1.15.0-alpha.0.1230+bfc470e4e5ae5b
I0315 20:42:54.676] Validate output:
I0315 20:42:54.980] NAME                 STATUS    MESSAGE             ERROR
I0315 20:42:54.980] controller-manager   Healthy   ok                  
I0315 20:42:54.980] etcd-0               Healthy   {"health":"true"}   
I0315 20:42:54.981] etcd-1               Healthy   {"health":"true"}   
I0315 20:42:54.981] scheduler            Healthy   ok                  
I0315 20:42:54.985] Cluster validation succeeded
W0315 20:42:55.085] Done, listing cluster services:
... skipping 60 lines ...
W0315 20:43:26.756] 2019/03/15 20:43:26 process.go:155: Step './hack/e2e-internal/e2e-up.sh' finished in 4m24.381261282s
W0315 20:43:26.756] 2019/03/15 20:43:26 process.go:153: Running: ./cluster/kubectl.sh --match-server-version=false version
W0315 20:43:27.078] 2019/03/15 20:43:27 process.go:155: Step './cluster/kubectl.sh --match-server-version=false version' finished in 365.345014ms
W0315 20:43:27.078] 2019/03/15 20:43:27 process.go:153: Running: ./cluster/kubectl.sh --match-server-version=false get nodes -oyaml
W0315 20:43:27.478] 2019/03/15 20:43:27 process.go:155: Step './cluster/kubectl.sh --match-server-version=false get nodes -oyaml' finished in 400.184581ms
W0315 20:43:27.479] 2019/03/15 20:43:27 process.go:153: Running: ./test/kubemark/stop-kubemark.sh
W0315 20:43:28.951] ERROR: (gcloud.compute.instances.delete) Could not fetch resource:
W0315 20:43:28.952]  - The resource 'projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-61775-ac87c-kubemark-master' was not found
W0315 20:43:28.952] 
W0315 20:43:29.840] ERROR: (gcloud.compute.disks.delete) Could not fetch resource:
W0315 20:43:29.840]  - The resource 'projects/k8s-presubmit-scale/zones/us-east1-b/disks/e2e-61775-ac87c-kubemark-master-pd' was not found
W0315 20:43:29.840] 
W0315 20:43:31.720] ERROR: (gcloud.compute.addresses.delete) Could not fetch resource:
W0315 20:43:31.723]  - The resource 'projects/k8s-presubmit-scale/regions/us-east1/addresses/e2e-61775-ac87c-kubemark-master-ip' was not found
W0315 20:43:31.723] 
W0315 20:43:32.445] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0315 20:43:32.445]  - The resource 'projects/k8s-presubmit-scale/global/firewalls/e2e-61775-ac87c-kubemark-master-https' was not found
W0315 20:43:32.446] 
W0315 20:43:32.501] 2019/03/15 20:43:32 process.go:155: Step './test/kubemark/stop-kubemark.sh' finished in 5.022163093s
W0315 20:43:32.501] 2019/03/15 20:43:32 process.go:153: Running: ./hack/e2e-internal/e2e-status.sh
W0315 20:43:32.554] Project: k8s-presubmit-scale
W0315 20:43:32.555] Network Project: k8s-presubmit-scale
W0315 20:43:32.555] Zone: us-east1-b
I0315 20:43:33.033] Client Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.0-alpha.0.1230+bfc470e4e5ae5b", GitCommit:"bfc470e4e5ae5bac94ba19cf8c33e5f6b7938f34", GitTreeState:"clean", BuildDate:"2019-03-15T19:40:58Z", GoVersion:"go1.12", Compiler:"gc", Platform:"linux/amd64"}
I0315 20:43:33.034] Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.0-alpha.0.1230+bfc470e4e5ae5b", GitCommit:"bfc470e4e5ae5bac94ba19cf8c33e5f6b7938f34", GitTreeState:"clean", BuildDate:"2019-03-15T19:40:58Z", GoVersion:"go1.12", Compiler:"gc", Platform:"linux/amd64"}
W0315 20:43:33.134] 2019/03/15 20:43:33 process.go:155: Step './hack/e2e-internal/e2e-status.sh' finished in 537.493681ms
W0315 20:43:33.134] 2019/03/15 20:43:33 process.go:153: Running: ./test/kubemark/start-kubemark.sh
W0315 20:43:33.891] ERROR: (gcloud.compute.addresses.describe) Could not fetch resource:
W0315 20:43:33.891]  - The resource 'projects/k8s-presubmit-scale/regions/us-east1/addresses/e2e-61775-ac87c-kubemark-master-ip' was not found
W0315 20:43:33.891] 
I0315 20:43:42.069] Created [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/regions/us-east1/addresses/e2e-61775-ac87c-kubemark-master-ip].
I0315 20:43:42.386] Succeeded to gcloud compute addresses.
I0315 20:43:43.206] Generating certs for alternate-names: IP:35.231.72.138,IP:10.0.0.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local,DNS:e2e-61775-ac87c-kubemark-master
I0315 20:43:46.298] Generated PKI authentication data for kubemark.
... skipping 660 lines ...
W0315 20:49:05.080] I0315 20:49:05.080640   91473 prometheus.go:145] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/prometheus-roleSpecificNamespaces.yaml
W0315 20:49:05.206] I0315 20:49:05.205795   91473 prometheus.go:145] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/prometheus-service.yaml
W0315 20:49:05.254] I0315 20:49:05.254358   91473 prometheus.go:145] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/prometheus-serviceAccount.yaml
W0315 20:49:05.297] I0315 20:49:05.297271   91473 prometheus.go:145] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/prometheus-serviceMonitor.yaml
W0315 20:49:05.343] I0315 20:49:05.343293   91473 prometheus.go:172] Exposing kube-apiserver metrics in kubemark cluster
W0315 20:49:05.543] I0315 20:49:05.543055   91473 prometheus.go:145] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/kubemark/kube-apiserver-endpoints.yaml
W0315 20:49:05.568] F0315 20:49:05.568165   91473 clusterloader.go:203] Error while setting up prometheus stack: unmarshaling error: yaml: line 14: could not find expected ':'
W0315 20:49:05.570] goroutine 1 [running]:
W0315 20:49:05.570] k8s.io/perf-tests/clusterloader2/vendor/k8s.io/klog.stacks(0xc00085c300, 0xc000596000, 0x9c, 0x1b7)
W0315 20:49:05.571] 	/go/src/k8s.io/perf-tests/clusterloader2/vendor/k8s.io/klog/klog.go:830 +0xb1
W0315 20:49:05.571] k8s.io/perf-tests/clusterloader2/vendor/k8s.io/klog.(*loggingT).output(0x25b5be0, 0xc000000003, 0xc0003f7f80, 0x2528909, 0x10, 0xcb, 0x0)
W0315 20:49:05.571] 	/go/src/k8s.io/perf-tests/clusterloader2/vendor/k8s.io/klog/klog.go:781 +0x25e
W0315 20:49:05.571] k8s.io/perf-tests/clusterloader2/vendor/k8s.io/klog.(*loggingT).printf(0x25b5be0, 0x3, 0x16175e4, 0x2b, 0xc000679d50, 0x1, 0x1)
... skipping 34 lines ...
W0315 20:49:39.061] scp: /var/log/kube-apiserver-audit.log*: No such file or directory
W0315 20:49:39.360] scp: /var/log/glbc.log*: No such file or directory
W0315 20:49:39.360] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0315 20:49:39.434] scp: /var/log/fluentd.log*: No such file or directory
W0315 20:49:39.434] scp: /var/log/kubelet.cov*: No such file or directory
W0315 20:49:39.434] scp: /var/log/startupscript.log*: No such file or directory
W0315 20:49:39.588] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0315 20:49:39.690] Skipping dumping of node logs
W0315 20:49:39.790] 2019/03/15 20:49:39 process.go:155: Step './test/kubemark/master-log-dump.sh /workspace/_artifacts' finished in 34.012558064s
W0315 20:49:39.791] 2019/03/15 20:49:39 process.go:153: Running: ./test/kubemark/stop-kubemark.sh
W0315 20:49:40.586] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0315 20:49:40.663] scp: /var/log/fluentd.log*: No such file or directory
W0315 20:49:40.663] scp: /var/log/kubelet.cov*: No such file or directory
W0315 20:49:40.664] scp: /var/log/startupscript.log*: No such file or directory
W0315 20:49:40.667] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0315 20:49:40.767] Dumping logs from nodes locally to '/workspace/_artifacts'
I0315 20:49:40.768] Detecting nodes in the cluster
I0315 20:50:16.871] Changing logfiles to be world-readable for download
I0315 20:50:16.937] Changing logfiles to be world-readable for download
I0315 20:50:17.221] Changing logfiles to be world-readable for download
I0315 20:50:17.285] Changing logfiles to be world-readable for download
... skipping 40 lines ...
W0315 20:50:24.874] scp: /var/log/node-problem-detector.log*: No such file or directory
W0315 20:50:24.874] scp: /var/log/kubelet.cov*: No such file or directory
W0315 20:50:25.050] scp: /var/log/fluentd.log*: No such file or directory
W0315 20:50:25.050] scp: /var/log/node-problem-detector.log*: No such file or directory
W0315 20:50:25.050] scp: /var/log/kubelet.cov*: No such file or directory
W0315 20:50:36.622] scp: /var/log/startupscript.log*: No such file or directory
W0315 20:50:36.627] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0315 20:50:39.481] scp: /var/log/startupscript.log*: No such file or directory
W0315 20:50:39.485] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0315 20:50:40.137] scp: /var/log/startupscript.log*: No such file or directory
W0315 20:50:40.141] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0315 20:50:40.272] scp: /var/log/startupscript.log*: No such file or directory
W0315 20:50:40.276] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0315 20:50:40.530] scp: /var/log/startupscript.log*: No such file or directory
W0315 20:50:40.534] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0315 20:50:40.936] scp: /var/log/startupscript.log*: No such file or directory
W0315 20:50:40.940] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0315 20:50:41.498] scp: /var/log/startupscript.log*: No such file or directory
W0315 20:50:41.503] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0315 20:50:44.815] INSTANCE_GROUPS=e2e-61775-ac87c-minion-group
W0315 20:50:44.816] NODE_NAMES=e2e-61775-ac87c-minion-group-24k4 e2e-61775-ac87c-minion-group-3vqb e2e-61775-ac87c-minion-group-3wgs e2e-61775-ac87c-minion-group-c8bx e2e-61775-ac87c-minion-group-jgdb e2e-61775-ac87c-minion-group-jqzx e2e-61775-ac87c-minion-group-l9td
I0315 20:50:45.713] Failures for e2e-61775-ac87c-minion-group
W0315 20:50:47.538] 2019/03/15 20:50:47 process.go:155: Step './cluster/log-dump/log-dump.sh /workspace/_artifacts' finished in 1m41.8599736s
W0315 20:50:47.539] 2019/03/15 20:50:47 process.go:153: Running: ./hack/e2e-internal/e2e-down.sh
W0315 20:50:47.595] Project: k8s-presubmit-scale
... skipping 19 lines ...
W0315 20:51:39.231] ....Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-61775-ac87c-kubemark-master-https].
W0315 20:51:39.491] 2019/03/15 20:51:39 process.go:155: Step './test/kubemark/stop-kubemark.sh' finished in 1m59.799255437s
W0315 20:54:23.429] ................................Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instanceGroupManagers/e2e-61775-ac87c-minion-group].
W0315 20:54:23.430] done.
W0315 20:54:29.121] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/instanceTemplates/e2e-61775-ac87c-minion-template].
W0315 20:54:36.207] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/instanceTemplates/e2e-61775-ac87c-windows-node-template].
I0315 20:54:41.556] {"message":"Internal Server Error"}Removing etcd replica, name: e2e-61775-ac87c-master, port: 2379, result: 0
I0315 20:54:43.077] {"message":"Internal Server Error"}Removing etcd replica, name: e2e-61775-ac87c-master, port: 4002, result: 0
W0315 20:54:49.535] Updated [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-61775-ac87c-master].
W0315 20:57:04.329] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-61775-ac87c-master].
W0315 20:57:24.958] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-61775-ac87c-master-https].
W0315 20:57:30.745] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-61775-ac87c-master-etcd].
W0315 20:57:31.167] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-61775-ac87c-minion-all].
W0315 20:57:40.249] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/regions/us-east1/addresses/e2e-61775-ac87c-master-ip].
... skipping 9 lines ...
I0315 20:59:23.472] Property "users.k8s-presubmit-scale_e2e-61775-ac87c-basic-auth" unset.
I0315 20:59:23.614] Property "contexts.k8s-presubmit-scale_e2e-61775-ac87c" unset.
I0315 20:59:23.617] Cleared config for k8s-presubmit-scale_e2e-61775-ac87c from /go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
I0315 20:59:23.618] Done
W0315 20:59:23.718] 2019/03/15 20:59:23 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 8m36.080975485s
W0315 20:59:23.718] 2019/03/15 20:59:23 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0315 20:59:23.719] 2019/03/15 20:59:23 main.go:307: Something went wrong: encountered 1 errors: [error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --nodes=500 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/load/kubemark/500_nodes/override.yaml: exit status 255]
W0315 20:59:23.719] Traceback (most recent call last):
W0315 20:59:23.719]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 764, in <module>
W0315 20:59:23.719]     main(parse_args())
W0315 20:59:23.719]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 615, in main
W0315 20:59:23.719]     mode.start(runner_args)
W0315 20:59:23.720]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0315 20:59:23.720]     check_env(env, self.command, *args)
W0315 20:59:23.720]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0315 20:59:23.720]     subprocess.check_call(cmd, env=env)
W0315 20:59:23.720]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0315 20:59:23.737]     raise CalledProcessError(retcode, cmd)
W0315 20:59:23.738] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--build=bazel', '--stage=gs://kubernetes-release-pull/ci/pull-kubernetes-kubemark-e2e-gce-big', '--up', '--down', '--provider=gce', '--cluster=e2e-61775-ac87c', '--gcp-network=e2e-61775-ac87c', '--extract=local', '--gcp-master-size=n1-standard-4', '--gcp-node-size=n1-standard-8', '--gcp-nodes=7', '--gcp-project=k8s-presubmit-scale', '--gcp-zone=us-east1-b', '--kubemark', '--kubemark-nodes=500', '--test_args=--ginkgo.focus=xxxx', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--nodes=500', '--test-cmd-args=--provider=kubemark', '--test-cmd-args=--report-dir=/workspace/_artifacts', '--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testoverrides=./testing/load/kubemark/500_nodes/override.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=100m')' returned non-zero exit status 1
E0315 20:59:23.748] Command failed
I0315 20:59:23.748] process 704 exited with code 1 after 42.1m
E0315 20:59:23.748] FAIL: pull-kubernetes-kubemark-e2e-gce-big
I0315 20:59:23.748] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0315 20:59:24.410] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0315 20:59:24.476] process 95680 exited with code 0 after 0.0m
I0315 20:59:24.477] Call:  gcloud config get-value account
I0315 20:59:24.927] process 95692 exited with code 0 after 0.0m
I0315 20:59:24.927] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0315 20:59:24.927] Upload result and artifacts...
I0315 20:59:24.927] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/61775/pull-kubernetes-kubemark-e2e-gce-big/41493
I0315 20:59:24.928] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/61775/pull-kubernetes-kubemark-e2e-gce-big/41493/artifacts
W0315 20:59:25.986] CommandException: One or more URLs matched no objects.
E0315 20:59:26.099] Command failed
I0315 20:59:26.099] process 95704 exited with code 1 after 0.0m
W0315 20:59:26.099] Remote dir gs://kubernetes-jenkins/pr-logs/pull/61775/pull-kubernetes-kubemark-e2e-gce-big/41493/artifacts not exist yet
I0315 20:59:26.099] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/61775/pull-kubernetes-kubemark-e2e-gce-big/41493/artifacts
I0315 20:59:45.045] process 95846 exited with code 0 after 0.3m
I0315 20:59:45.046] Call:  git rev-parse HEAD
I0315 20:59:45.050] process 98048 exited with code 0 after 0.0m
... skipping 21 lines ...