PR rramkumar1: Plumb CUSTOM_INGRESS_YAML into other setup scripts
Result: FAILURE
Tests: 1 failed / 19 succeeded
Started: 2019-03-15 20:52
Elapsed: 44m10s
Builder: gke-prow-containerd-pool-99179761-21wb
Refs: master:df2094b3, 75381:8ac15cd5
pod: 199f9ec1-4764-11e9-be52-0a580a6c0982
infra-commit: 5def04463
job-version: v1.15.0-alpha.0.1230+418fc85b863ee0
repo: k8s.io/kubernetes
repo-commit: 418fc85b863ee0020d48fb093f603768c0b212a8
repos: {u'k8s.io/kubernetes': u'master:df2094b3d728bd58c5f23b41add109e32fc7c301,75381:8ac15cd54a202b325679f4d0ad6a81381ade9950', u'k8s.io/perf-tests': u'master', u'k8s.io/release': u'master'}
revision: v1.15.0-alpha.0.1230+418fc85b863ee0

Test Failures


ClusterLoaderV2 1m9s

error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --nodes=500 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/load/kubemark/500_nodes/override.yaml: exit status 255
				from junit_runner.xml
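
The exit status 255 traces to a fatal YAML parse failure while clusterloader2 sets up its Prometheus stack (see the 21:25:20 lines below). As a minimal repro sketch, assuming clusterloader2's unmarshaling sits on a libyaml-derived parser such as gopkg.in/yaml.v2, and using a hypothetical broken manifest (not the one the job actually applied), a mapping key missing its ':' produces the same error text:

```go
// Minimal repro sketch. Assumptions: gopkg.in/yaml.v2 semantics; the
// manifest below is hypothetical, not the file the job applied.
package main

import (
	"fmt"

	yaml "gopkg.in/yaml.v2"
)

func main() {
	// "metadata" is missing its ':', so the scanner cannot close the key.
	bad := []byte("apiVersion: v1\nkind: Endpoints\nmetadata\nname: foo\n")
	var out map[string]interface{}
	if err := yaml.Unmarshal(bad, &out); err != nil {
		// Prints something like: yaml: line 3: could not find expected ':'
		fmt.Println(err)
	}
}
```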




Error lines from build-log.txt

... skipping 1163 lines ...
W0315 21:18:23.413] Trying to find master named 'e2e-75381-ac87c-master'
W0315 21:18:23.413] Looking for address 'e2e-75381-ac87c-master-ip'
W0315 21:18:24.180] Using master: e2e-75381-ac87c-master (external IP: 35.243.188.88)
I0315 21:18:24.281] Waiting up to 300 seconds for cluster initialization.
I0315 21:18:24.281] 
I0315 21:18:24.281]   This will continually check to see if the API for kubernetes is reachable.
I0315 21:18:24.282]   This may time out if there was some uncaught error during start up.
I0315 21:18:24.282] 
I0315 21:18:40.281] ...Kubernetes cluster created.
I0315 21:18:40.415] Cluster "k8s-presubmit-scale_e2e-75381-ac87c" set.
I0315 21:18:40.548] User "k8s-presubmit-scale_e2e-75381-ac87c" set.
I0315 21:18:40.687] Context "k8s-presubmit-scale_e2e-75381-ac87c" created.
I0315 21:18:40.824] Switched to context "k8s-presubmit-scale_e2e-75381-ac87c".
... skipping 23 lines ...
I0315 21:19:15.875] e2e-75381-ac87c-minion-group-7mtd   Ready                      <none>   5s    v1.15.0-alpha.0.1230+418fc85b863ee0
I0315 21:19:15.875] e2e-75381-ac87c-minion-group-bggw   Ready                      <none>   5s    v1.15.0-alpha.0.1230+418fc85b863ee0
I0315 21:19:15.876] e2e-75381-ac87c-minion-group-dgkb   Ready                      <none>   5s    v1.15.0-alpha.0.1230+418fc85b863ee0
I0315 21:19:15.876] e2e-75381-ac87c-minion-group-k2wj   Ready                      <none>   5s    v1.15.0-alpha.0.1230+418fc85b863ee0
I0315 21:19:15.876] e2e-75381-ac87c-minion-group-rm3d   Ready                      <none>   5s    v1.15.0-alpha.0.1230+418fc85b863ee0
I0315 21:19:16.190] Validate output:
I0315 21:19:16.493] NAME                 STATUS    MESSAGE             ERROR
I0315 21:19:16.493] controller-manager   Healthy   ok                  
I0315 21:19:16.494] etcd-0               Healthy   {"health":"true"}   
I0315 21:19:16.494] scheduler            Healthy   ok                  
I0315 21:19:16.494] etcd-1               Healthy   {"health":"true"}   
I0315 21:19:16.497] Cluster validation succeeded
W0315 21:19:16.598] Done, listing cluster services:
... skipping 60 lines ...
W0315 21:19:43.047] 2019/03/15 21:19:43 process.go:155: Step './hack/e2e-internal/e2e-up.sh' finished in 4m27.478419258s
W0315 21:19:43.047] 2019/03/15 21:19:43 process.go:153: Running: ./cluster/kubectl.sh --match-server-version=false version
W0315 21:19:43.356] 2019/03/15 21:19:43 process.go:155: Step './cluster/kubectl.sh --match-server-version=false version' finished in 343.168295ms
W0315 21:19:43.356] 2019/03/15 21:19:43 process.go:153: Running: ./cluster/kubectl.sh --match-server-version=false get nodes -oyaml
W0315 21:19:43.791] 2019/03/15 21:19:43 process.go:155: Step './cluster/kubectl.sh --match-server-version=false get nodes -oyaml' finished in 435.497419ms
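
The interleaved process.go:153/process.go:155 lines come from the test runner timing each external step. A hypothetical helper mirroring that "Running:" / "finished in" pattern, not kubetest's actual implementation:

```go
// Hypothetical step runner mirroring the log lines above; a sketch,
// not kubetest's actual code.
package main

import (
	"log"
	"os/exec"
	"time"
)

// runStep logs the command, executes it, and logs the elapsed time,
// whether or not the command succeeded.
func runStep(name string, args ...string) error {
	log.Printf("Running: %s", name)
	start := time.Now()
	err := exec.Command(name, args...).Run()
	log.Printf("Step '%s' finished in %v", name, time.Since(start))
	return err
}

func main() {
	// Example step, as in the log above.
	if err := runStep("./cluster/kubectl.sh", "--match-server-version=false", "version"); err != nil {
		log.Printf("step failed: %v", err)
	}
}
```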
W0315 21:19:43.831] 2019/03/15 21:19:43 process.go:153: Running: ./test/kubemark/stop-kubemark.sh
W0315 21:19:45.506] ERROR: (gcloud.compute.instances.delete) Could not fetch resource:
W0315 21:19:45.506]  - The resource 'projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-75381-ac87c-kubemark-master' was not found
W0315 21:19:45.506] 
W0315 21:19:46.520] ERROR: (gcloud.compute.disks.delete) Could not fetch resource:
W0315 21:19:46.520]  - The resource 'projects/k8s-presubmit-scale/zones/us-east1-b/disks/e2e-75381-ac87c-kubemark-master-pd' was not found
W0315 21:19:46.520] 
W0315 21:19:48.593] ERROR: (gcloud.compute.addresses.delete) Could not fetch resource:
W0315 21:19:48.593]  - The resource 'projects/k8s-presubmit-scale/regions/us-east1/addresses/e2e-75381-ac87c-kubemark-master-ip' was not found
W0315 21:19:48.593] 
W0315 21:19:49.408] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0315 21:19:49.408]  - The resource 'projects/k8s-presubmit-scale/global/firewalls/e2e-75381-ac87c-kubemark-master-https' was not found
W0315 21:19:49.409] 
W0315 21:19:49.477] 2019/03/15 21:19:49 process.go:155: Step './test/kubemark/stop-kubemark.sh' finished in 5.645512838s
W0315 21:19:49.477] 2019/03/15 21:19:49 process.go:153: Running: ./hack/e2e-internal/e2e-status.sh
W0315 21:19:49.547] Project: k8s-presubmit-scale
W0315 21:19:49.547] Network Project: k8s-presubmit-scale
W0315 21:19:49.547] Zone: us-east1-b
I0315 21:19:49.942] Client Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.0-alpha.0.1230+418fc85b863ee0", GitCommit:"418fc85b863ee0020d48fb093f603768c0b212a8", GitTreeState:"clean", BuildDate:"2019-03-15T19:40:58Z", GoVersion:"go1.12", Compiler:"gc", Platform:"linux/amd64"}
I0315 21:19:49.942] Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.0-alpha.0.1230+418fc85b863ee0", GitCommit:"418fc85b863ee0020d48fb093f603768c0b212a8", GitTreeState:"clean", BuildDate:"2019-03-15T19:40:58Z", GoVersion:"go1.12", Compiler:"gc", Platform:"linux/amd64"}
W0315 21:19:50.043] 2019/03/15 21:19:49 process.go:155: Step './hack/e2e-internal/e2e-status.sh' finished in 470.921225ms
W0315 21:19:50.043] 2019/03/15 21:19:49 process.go:153: Running: ./test/kubemark/start-kubemark.sh
W0315 21:19:50.961] ERROR: (gcloud.compute.addresses.describe) Could not fetch resource:
W0315 21:19:50.962]  - The resource 'projects/k8s-presubmit-scale/regions/us-east1/addresses/e2e-75381-ac87c-kubemark-master-ip' was not found
W0315 21:19:50.962] 
I0315 21:19:56.650] Created [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/regions/us-east1/addresses/e2e-75381-ac87c-kubemark-master-ip].
I0315 21:19:56.949] Succeeded to gcloud compute addresses.
I0315 21:19:57.985] Generating certs for alternate-names: IP:35.237.188.56,IP:10.0.0.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local,DNS:e2e-75381-ac87c-kubemark-master
I0315 21:19:59.885] Generated PKI authentication data for kubemark.
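
The alternate names above become the subject alternative names on the kubemark master's serving certificate. The scripts drive their own PKI tooling; purely as an illustration, a self-signed certificate carrying the same IP and DNS SANs (values copied from the log line) can be built in Go:

```go
// Illustration only: a self-signed cert with the SANs from the log line.
// The kubemark scripts use their own PKI tooling, not this code.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "e2e-75381-ac87c-kubemark-master"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		// SANs taken from the "Generating certs for alternate-names" line.
		IPAddresses: []net.IP{net.ParseIP("35.237.188.56"), net.ParseIP("10.0.0.1")},
		DNSNames: []string{
			"kubernetes",
			"kubernetes.default",
			"kubernetes.default.svc",
			"kubernetes.default.svc.cluster.local",
			"e2e-75381-ac87c-kubemark-master",
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("generated %d-byte DER certificate\n", len(der))
}
```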
... skipping 659 lines ...
W0315 21:25:20.362] I0315 21:25:20.361911   93033 prometheus.go:145] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/prometheus-roleSpecificNamespaces.yaml
W0315 21:25:20.484] I0315 21:25:20.484143   93033 prometheus.go:145] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/prometheus-service.yaml
W0315 21:25:20.530] I0315 21:25:20.530116   93033 prometheus.go:145] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/prometheus-serviceAccount.yaml
W0315 21:25:20.574] I0315 21:25:20.573852   93033 prometheus.go:145] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/prometheus-serviceMonitor.yaml
W0315 21:25:20.620] I0315 21:25:20.619929   93033 prometheus.go:172] Exposing kube-apiserver metrics in kubemark cluster
W0315 21:25:20.818] I0315 21:25:20.818627   93033 prometheus.go:145] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/kubemark/kube-apiserver-endpoints.yaml
W0315 21:25:20.822] F0315 21:25:20.822452   93033 clusterloader.go:203] Error while setting up prometheus stack: unmarshaling error: yaml: line 14: could not find expected ':'
W0315 21:25:20.824] goroutine 1 [running]:
W0315 21:25:20.824] k8s.io/perf-tests/clusterloader2/vendor/k8s.io/klog.stacks(0xc000362300, 0xc0005ba000, 0x9c, 0x1b7)
W0315 21:25:20.824] 	/go/src/k8s.io/perf-tests/clusterloader2/vendor/k8s.io/klog/klog.go:830 +0xb1
W0315 21:25:20.824] k8s.io/perf-tests/clusterloader2/vendor/k8s.io/klog.(*loggingT).output(0x25b5be0, 0xc000000003, 0xc0005af030, 0x2528909, 0x10, 0xcb, 0x0)
W0315 21:25:20.825] 	/go/src/k8s.io/perf-tests/clusterloader2/vendor/k8s.io/klog/klog.go:781 +0x25e
W0315 21:25:20.825] k8s.io/perf-tests/clusterloader2/vendor/k8s.io/klog.(*loggingT).printf(0x25b5be0, 0x3, 0x16175e4, 0x2b, 0xc000775d50, 0x1, 0x1)
... skipping 34 lines ...
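
The F-severity line and goroutine dump above are klog's fatal path: Fatalf logs the message, dumps all goroutine stacks, and exits the process with status 255, which is exactly the "exit status 255" the runner reports for ClusterLoaderV2. A minimal sketch, assuming the vendored k8s.io/klog v1 API and a stand-in error:

```go
// Minimal sketch of the fatal path seen above. Assumes k8s.io/klog v1;
// the setup function and its error are stand-ins, not clusterloader2 code.
package main

import (
	"errors"

	"k8s.io/klog"
)

func setupPrometheusStack() error {
	// Stand-in failure mirroring the log message.
	return errors.New("unmarshaling error: yaml: line 14: could not find expected ':'")
}

func main() {
	if err := setupPrometheusStack(); err != nil {
		// Fatalf logs at severity F, dumps goroutine stacks, then
		// calls os.Exit(255) -- the "exit status 255" in the summary.
		klog.Fatalf("Error while setting up prometheus stack: %v", err)
	}
}
```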
W0315 21:25:54.450] scp: /var/log/kube-apiserver-audit.log*: No such file or directory
W0315 21:25:54.752] scp: /var/log/glbc.log*: No such file or directory
W0315 21:25:54.752] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0315 21:25:54.826] scp: /var/log/fluentd.log*: No such file or directory
W0315 21:25:54.826] scp: /var/log/kubelet.cov*: No such file or directory
W0315 21:25:54.826] scp: /var/log/startupscript.log*: No such file or directory
W0315 21:25:54.924] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0315 21:25:55.010] 2019/03/15 21:25:55 process.go:155: Step './test/kubemark/master-log-dump.sh /workspace/_artifacts' finished in 34.022095028s
W0315 21:25:55.011] 2019/03/15 21:25:55 process.go:153: Running: ./test/kubemark/stop-kubemark.sh
I0315 21:25:55.111] Skipping dumping of node logs
W0315 21:25:55.964] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0315 21:25:56.038] scp: /var/log/fluentd.log*: No such file or directory
W0315 21:25:56.038] scp: /var/log/kubelet.cov*: No such file or directory
W0315 21:25:56.039] scp: /var/log/startupscript.log*: No such file or directory
W0315 21:25:56.044] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0315 21:25:56.144] Dumping logs from nodes locally to '/workspace/_artifacts'
I0315 21:25:56.144] Detecting nodes in the cluster
I0315 21:26:32.418] Changing logfiles to be world-readable for download
I0315 21:26:32.563] Changing logfiles to be world-readable for download
W0315 21:26:33.513] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-75381-ac87c-kubemark-master].
I0315 21:26:33.721] Changing logfiles to be world-readable for download
... skipping 43 lines ...
W0315 21:26:40.889] scp: /var/log/kubelet.cov*: No such file or directory
W0315 21:26:40.966] scp: /var/log/fluentd.log*: No such file or directory
W0315 21:26:40.966] scp: /var/log/node-problem-detector.log*: No such file or directory
W0315 21:26:40.966] scp: /var/log/kubelet.cov*: No such file or directory
W0315 21:26:47.214] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/regions/us-east1/addresses/e2e-75381-ac87c-kubemark-master-ip].
W0315 21:26:54.536] scp: /var/log/startupscript.log*: No such file or directory
W0315 21:26:54.541] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0315 21:26:54.609] scp: /var/log/startupscript.log*: No such file or directory
W0315 21:26:54.613] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0315 21:26:55.644] scp: /var/log/startupscript.log*: No such file or directory
W0315 21:26:55.648] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0315 21:26:56.118] scp: /var/log/startupscript.log*: No such file or directory
W0315 21:26:56.123] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0315 21:26:57.145] scp: /var/log/startupscript.log*: No such file or directory
W0315 21:26:57.149] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0315 21:26:57.514] scp: /var/log/startupscript.log*: No such file or directory
W0315 21:26:57.518] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0315 21:26:57.836] scp: /var/log/startupscript.log*: No such file or directory
W0315 21:26:57.840] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0315 21:26:59.060] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-75381-ac87c-kubemark-master-https].
W0315 21:26:59.286] 2019/03/15 21:26:59 process.go:155: Step './test/kubemark/stop-kubemark.sh' finished in 1m4.27585248s
W0315 21:27:01.271] INSTANCE_GROUPS=e2e-75381-ac87c-minion-group
W0315 21:27:01.272] NODE_NAMES=e2e-75381-ac87c-minion-group-0zw0 e2e-75381-ac87c-minion-group-7bs4 e2e-75381-ac87c-minion-group-7mtd e2e-75381-ac87c-minion-group-bggw e2e-75381-ac87c-minion-group-dgkb e2e-75381-ac87c-minion-group-k2wj e2e-75381-ac87c-minion-group-rm3d
I0315 21:27:02.234] Failures for e2e-75381-ac87c-minion-group
W0315 21:27:03.607] 2019/03/15 21:27:03 process.go:155: Step './cluster/log-dump/log-dump.sh /workspace/_artifacts' finished in 1m42.636467233s
... skipping 16 lines ...
I0315 21:27:31.469] Bringing down cluster
W0315 21:27:34.432] Deleting Managed Instance Group...
W0315 21:30:45.397] ........................................Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instanceGroupManagers/e2e-75381-ac87c-minion-group].
W0315 21:30:45.397] done.
W0315 21:30:51.183] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/instanceTemplates/e2e-75381-ac87c-minion-template].
W0315 21:30:57.706] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/instanceTemplates/e2e-75381-ac87c-windows-node-template].
I0315 21:31:11.221] {"message":"Internal Server Error"}Removing etcd replica, name: e2e-75381-ac87c-master, port: 2379, result: 0
I0315 21:31:12.775] {"message":"Internal Server Error"}Removing etcd replica, name: e2e-75381-ac87c-master, port: 4002, result: 0
W0315 21:31:19.276] Updated [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-75381-ac87c-master].
W0315 21:33:44.007] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-75381-ac87c-master].
W0315 21:34:07.754] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-75381-ac87c-master-https].
W0315 21:34:12.869] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-75381-ac87c-master-etcd].
W0315 21:34:13.913] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-75381-ac87c-minion-all].
W0315 21:34:22.664] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/regions/us-east1/addresses/e2e-75381-ac87c-master-ip].
... skipping 9 lines ...
I0315 21:35:46.881] Property "users.k8s-presubmit-scale_e2e-75381-ac87c-basic-auth" unset.
I0315 21:35:47.018] Property "contexts.k8s-presubmit-scale_e2e-75381-ac87c" unset.
I0315 21:35:47.022] Cleared config for k8s-presubmit-scale_e2e-75381-ac87c from /go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
I0315 21:35:47.022] Done
W0315 21:35:47.122] 2019/03/15 21:35:47 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 8m43.416262142s
W0315 21:35:47.123] 2019/03/15 21:35:47 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0315 21:35:47.123] 2019/03/15 21:35:47 main.go:307: Something went wrong: encountered 1 errors: [error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --nodes=500 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/load/kubemark/500_nodes/override.yaml: exit status 255]
W0315 21:35:47.124] Traceback (most recent call last):
W0315 21:35:47.124]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 764, in <module>
W0315 21:35:47.124]     main(parse_args())
W0315 21:35:47.124]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 615, in main
W0315 21:35:47.125]     mode.start(runner_args)
W0315 21:35:47.125]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0315 21:35:47.125]     check_env(env, self.command, *args)
W0315 21:35:47.125]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0315 21:35:47.125]     subprocess.check_call(cmd, env=env)
W0315 21:35:47.126]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0315 21:35:47.152]     raise CalledProcessError(retcode, cmd)
W0315 21:35:47.153] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--build=bazel', '--stage=gs://kubernetes-release-pull/ci/pull-kubernetes-kubemark-e2e-gce-big', '--up', '--down', '--provider=gce', '--cluster=e2e-75381-ac87c', '--gcp-network=e2e-75381-ac87c', '--extract=local', '--gcp-master-size=n1-standard-4', '--gcp-node-size=n1-standard-8', '--gcp-nodes=7', '--gcp-project=k8s-presubmit-scale', '--gcp-zone=us-east1-b', '--kubemark', '--kubemark-nodes=500', '--test_args=--ginkgo.focus=xxxx', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--nodes=500', '--test-cmd-args=--provider=kubemark', '--test-cmd-args=--report-dir=/workspace/_artifacts', '--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testoverrides=./testing/load/kubemark/500_nodes/override.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=100m')' returned non-zero exit status 1
E0315 21:35:47.164] Command failed
I0315 21:35:47.164] process 734 exited with code 1 after 42.2m
E0315 21:35:47.164] FAIL: pull-kubernetes-kubemark-e2e-gce-big
I0315 21:35:47.165] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0315 21:35:47.660] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0315 21:35:47.784] process 97241 exited with code 0 after 0.0m
I0315 21:35:47.785] Call:  gcloud config get-value account
I0315 21:35:48.099] process 97253 exited with code 0 after 0.0m
I0315 21:35:48.099] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0315 21:35:48.099] Upload result and artifacts...
I0315 21:35:48.099] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/75381/pull-kubernetes-kubemark-e2e-gce-big/41498
I0315 21:35:48.100] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/75381/pull-kubernetes-kubemark-e2e-gce-big/41498/artifacts
W0315 21:35:49.222] CommandException: One or more URLs matched no objects.
E0315 21:35:49.350] Command failed
I0315 21:35:49.351] process 97265 exited with code 1 after 0.0m
W0315 21:35:49.351] Remote dir gs://kubernetes-jenkins/pr-logs/pull/75381/pull-kubernetes-kubemark-e2e-gce-big/41498/artifacts not exist yet
I0315 21:35:49.351] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/75381/pull-kubernetes-kubemark-e2e-gce-big/41498/artifacts
I0315 21:36:06.534] process 97407 exited with code 0 after 0.3m
I0315 21:36:06.535] Call:  git rev-parse HEAD
I0315 21:36:06.539] process 99609 exited with code 0 after 0.0m
... skipping 21 lines ...