PR by JacobTanenbaum: Clear conntrack entries on 0 -> 1 endpoint transition with externalIPs

Result: FAILURE
Tests: 1 failed / 19 succeeded
Started: 2019-03-15 18:39
Elapsed: 46m54s
Revision: master:b0494b08, 75265:c3548165
Builder: gke-prow-containerd-pool-99179761-j2kg
pod: a8170311-4751-11e9-be52-0a580a6c0982
infra-commit: 8f48691ab
job-version: v1.15.0-alpha.0.1228+9ad08cb68d825b
repo: k8s.io/kubernetes
repo-commit: 9ad08cb68d825bf6dbbc5a37151240b3a46286cf
repos: {u'k8s.io/kubernetes': u'master:b0494b081d5c97c21115cd2921f7c5b536470591,75265:c3548165d5dacd25a12896bacd0f8b6f71c55510', u'k8s.io/perf-tests': u'master', u'k8s.io/release': u'master'}
revision: v1.15.0-alpha.0.1228+9ad08cb68d825b

Test Failures


ClusterLoaderV2 (1m4s)

error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --nodes=500 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/load/kubemark/500_nodes/override.yaml: exit status 255
(from junit_runner.xml)
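The root cause appears further down in the build log: clusterloader2 aborts while setting up the Prometheus stack with "unmarshaling error: yaml: line 13: could not find expected ':'", and the last manifest applied before the fatal message is kubemark/kube-apiserver-endpoints.yaml. A minimal sketch for catching this kind of syntax error locally before a full kubemark run, assuming PyYAML is installed (the script name and the path passed in are illustrative):

# yaml_check.py -- minimal manifest syntax check (sketch; the path argument is illustrative)
import sys
import yaml  # PyYAML

def check(path):
    """Return 0 if every YAML document in the file parses, 1 otherwise."""
    with open(path) as f:
        try:
            list(yaml.safe_load_all(f))  # force a full parse of every document in the file
        except yaml.YAMLError as err:
            print("%s: %s" % (path, err))
            return 1
    return 0

if __name__ == "__main__":
    sys.exit(check(sys.argv[1]))

Run against the files under /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/ (the directory named in the log), this would surface the same "could not find expected ':'" failure without standing up a cluster.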

Error lines from build-log.txt

... skipping 1168 lines ...
W0315 19:08:48.953] Trying to find master named 'e2e-75265-ac87c-master'
W0315 19:08:48.954] Looking for address 'e2e-75265-ac87c-master-ip'
W0315 19:08:49.777] Using master: e2e-75265-ac87c-master (external IP: 104.196.111.150)
I0315 19:08:49.877] Waiting up to 300 seconds for cluster initialization.
I0315 19:08:49.878] 
I0315 19:08:49.878]   This will continually check to see if the API for kubernetes is reachable.
I0315 19:08:49.878]   This may time out if there was some uncaught error during start up.
I0315 19:08:49.878] 
I0315 19:09:10.174] .....Kubernetes cluster created.
I0315 19:09:10.327] Cluster "k8s-presubmit-scale_e2e-75265-ac87c" set.
I0315 19:09:10.483] User "k8s-presubmit-scale_e2e-75265-ac87c" set.
I0315 19:09:10.635] Context "k8s-presubmit-scale_e2e-75265-ac87c" created.
I0315 19:09:10.785] Switched to context "k8s-presubmit-scale_e2e-75265-ac87c".
... skipping 23 lines ...
I0315 19:09:46.974] e2e-75265-ac87c-minion-group-9ldt   Ready                      <none>   11s   v1.15.0-alpha.0.1228+9ad08cb68d825b
I0315 19:09:46.974] e2e-75265-ac87c-minion-group-9xzm   Ready                      <none>   7s    v1.15.0-alpha.0.1228+9ad08cb68d825b
I0315 19:09:46.974] e2e-75265-ac87c-minion-group-sdql   Ready                      <none>   11s   v1.15.0-alpha.0.1228+9ad08cb68d825b
I0315 19:09:46.974] e2e-75265-ac87c-minion-group-tgkb   Ready                      <none>   11s   v1.15.0-alpha.0.1228+9ad08cb68d825b
I0315 19:09:46.974] e2e-75265-ac87c-minion-group-v13g   Ready                      <none>   12s   v1.15.0-alpha.0.1228+9ad08cb68d825b
I0315 19:09:47.327] Validate output:
I0315 19:09:47.670] NAME                 STATUS    MESSAGE             ERROR
I0315 19:09:47.671] scheduler            Healthy   ok                  
I0315 19:09:47.671] etcd-0               Healthy   {"health":"true"}   
I0315 19:09:47.671] controller-manager   Healthy   ok                  
I0315 19:09:47.671] etcd-1               Healthy   {"health":"true"}   
I0315 19:09:47.677] Cluster validation succeeded
W0315 19:09:47.777] Done, listing cluster services:
... skipping 60 lines ...
W0315 19:10:24.562] 2019/03/15 19:10:24 process.go:155: Step './hack/e2e-internal/e2e-up.sh' finished in 4m29.098976331s
W0315 19:10:24.563] 2019/03/15 19:10:24 process.go:153: Running: ./cluster/kubectl.sh --match-server-version=false version
W0315 19:10:24.909] 2019/03/15 19:10:24 process.go:155: Step './cluster/kubectl.sh --match-server-version=false version' finished in 373.735314ms
W0315 19:10:24.910] 2019/03/15 19:10:24 process.go:153: Running: ./cluster/kubectl.sh --match-server-version=false get nodes -oyaml
W0315 19:10:25.322] 2019/03/15 19:10:25 process.go:155: Step './cluster/kubectl.sh --match-server-version=false get nodes -oyaml' finished in 412.952526ms
W0315 19:10:25.323] 2019/03/15 19:10:25 process.go:153: Running: ./test/kubemark/stop-kubemark.sh
W0315 19:10:26.914] ERROR: (gcloud.compute.instances.delete) Could not fetch resource:
W0315 19:10:26.914]  - The resource 'projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-75265-ac87c-kubemark-master' was not found
W0315 19:10:26.915] 
W0315 19:10:27.932] ERROR: (gcloud.compute.disks.delete) Could not fetch resource:
W0315 19:10:27.932]  - The resource 'projects/k8s-presubmit-scale/zones/us-east1-b/disks/e2e-75265-ac87c-kubemark-master-pd' was not found
W0315 19:10:27.932] 
W0315 19:10:29.837] ERROR: (gcloud.compute.addresses.delete) Could not fetch resource:
W0315 19:10:29.838]  - The resource 'projects/k8s-presubmit-scale/regions/us-east1/addresses/e2e-75265-ac87c-kubemark-master-ip' was not found
W0315 19:10:29.838] 
W0315 19:10:30.678] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0315 19:10:30.679]  - The resource 'projects/k8s-presubmit-scale/global/firewalls/e2e-75265-ac87c-kubemark-master-https' was not found
W0315 19:10:30.679] 
W0315 19:10:30.754] 2019/03/15 19:10:30 process.go:155: Step './test/kubemark/stop-kubemark.sh' finished in 5.431381046s
W0315 19:10:30.754] 2019/03/15 19:10:30 process.go:153: Running: ./hack/e2e-internal/e2e-status.sh
W0315 19:10:30.816] Project: k8s-presubmit-scale
W0315 19:10:30.816] Network Project: k8s-presubmit-scale
W0315 19:10:30.816] Zone: us-east1-b
I0315 19:10:31.353] Client Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.0-alpha.0.1228+9ad08cb68d825b", GitCommit:"9ad08cb68d825bf6dbbc5a37151240b3a46286cf", GitTreeState:"clean", BuildDate:"2019-03-15T16:10:58Z", GoVersion:"go1.12", Compiler:"gc", Platform:"linux/amd64"}
I0315 19:10:31.354] Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.0-alpha.0.1228+9ad08cb68d825b", GitCommit:"9ad08cb68d825bf6dbbc5a37151240b3a46286cf", GitTreeState:"clean", BuildDate:"2019-03-15T16:10:58Z", GoVersion:"go1.12", Compiler:"gc", Platform:"linux/amd64"}
W0315 19:10:31.455] 2019/03/15 19:10:31 process.go:155: Step './hack/e2e-internal/e2e-status.sh' finished in 605.916881ms
W0315 19:10:31.455] 2019/03/15 19:10:31 process.go:153: Running: ./test/kubemark/start-kubemark.sh
W0315 19:10:32.296] ERROR: (gcloud.compute.addresses.describe) Could not fetch resource:
W0315 19:10:32.297]  - The resource 'projects/k8s-presubmit-scale/regions/us-east1/addresses/e2e-75265-ac87c-kubemark-master-ip' was not found
W0315 19:10:32.297] 
I0315 19:10:40.644] Created [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/regions/us-east1/addresses/e2e-75265-ac87c-kubemark-master-ip].
I0315 19:10:41.030] Succeeded to gcloud compute addresses.
I0315 19:10:41.897] Generating certs for alternate-names: IP:35.231.235.219,IP:10.0.0.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local,DNS:e2e-75265-ac87c-kubemark-master
I0315 19:10:43.659] Generated PKI authentication data for kubemark.
... skipping 660 lines ...
W0315 19:16:19.794] I0315 19:16:19.794051   89881 prometheus.go:145] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/prometheus-roleSpecificNamespaces.yaml
W0315 19:16:19.926] I0315 19:16:19.925890   89881 prometheus.go:145] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/prometheus-service.yaml
W0315 19:16:19.974] I0315 19:16:19.974561   89881 prometheus.go:145] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/prometheus-serviceAccount.yaml
W0315 19:16:20.018] I0315 19:16:20.018002   89881 prometheus.go:145] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/prometheus-serviceMonitor.yaml
W0315 19:16:20.062] I0315 19:16:20.062633   89881 prometheus.go:172] Exposing kube-apiserver metrics in kubemark cluster
W0315 19:16:20.269] I0315 19:16:20.269487   89881 prometheus.go:145] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/kubemark/kube-apiserver-endpoints.yaml
W0315 19:16:20.272] F0315 19:16:20.272625   89881 clusterloader.go:203] Error while setting up prometheus stack: unmarshaling error: yaml: line 13: could not find expected ':'
W0315 19:16:20.274] goroutine 1 [running]:
W0315 19:16:20.274] k8s.io/perf-tests/clusterloader2/vendor/k8s.io/klog.stacks(0xc000818300, 0xc000822000, 0x9c, 0x1b7)
W0315 19:16:20.274] 	/go/src/k8s.io/perf-tests/clusterloader2/vendor/k8s.io/klog/klog.go:830 +0xb1
W0315 19:16:20.275] k8s.io/perf-tests/clusterloader2/vendor/k8s.io/klog.(*loggingT).output(0x25afbe0, 0xc000000003, 0xc00046c310, 0x2522d3d, 0x10, 0xcb, 0x0)
W0315 19:16:20.275] 	/go/src/k8s.io/perf-tests/clusterloader2/vendor/k8s.io/klog/klog.go:781 +0x25f
W0315 19:16:20.275] k8s.io/perf-tests/clusterloader2/vendor/k8s.io/klog.(*loggingT).printf(0x25afbe0, 0x3, 0x16125e2, 0x2b, 0xc000a7dd50, 0x1, 0x1)
... skipping 34 lines ...
W0315 19:16:54.748] scp: /var/log/kube-apiserver-audit.log*: No such file or directory
W0315 19:16:55.053] scp: /var/log/glbc.log*: No such file or directory
W0315 19:16:55.054] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0315 19:16:55.129] scp: /var/log/fluentd.log*: No such file or directory
W0315 19:16:55.129] scp: /var/log/kubelet.cov*: No such file or directory
W0315 19:16:55.129] scp: /var/log/startupscript.log*: No such file or directory
W0315 19:16:55.133] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0315 19:16:55.238] Skipping dumping of node logs
W0315 19:16:55.339] 2019/03/15 19:16:55 process.go:155: Step './test/kubemark/master-log-dump.sh /workspace/_artifacts' finished in 34.723850602s
W0315 19:16:55.339] 2019/03/15 19:16:55 process.go:153: Running: ./test/kubemark/stop-kubemark.sh
W0315 19:16:55.879] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0315 19:16:55.956] scp: /var/log/fluentd.log*: No such file or directory
W0315 19:16:55.956] scp: /var/log/kubelet.cov*: No such file or directory
W0315 19:16:55.956] scp: /var/log/startupscript.log*: No such file or directory
W0315 19:16:55.962] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0315 19:16:56.062] Dumping logs from nodes locally to '/workspace/_artifacts'
I0315 19:16:56.063] Detecting nodes in the cluster
I0315 19:17:31.818] Changing logfiles to be world-readable for download
I0315 19:17:31.872] Changing logfiles to be world-readable for download
I0315 19:17:32.476] Changing logfiles to be world-readable for download
I0315 19:17:32.688] Changing logfiles to be world-readable for download
... skipping 40 lines ...
W0315 19:17:39.684] scp: /var/log/node-problem-detector.log*: No such file or directory
W0315 19:17:39.684] scp: /var/log/kubelet.cov*: No such file or directory
W0315 19:17:39.815] scp: /var/log/fluentd.log*: No such file or directory
W0315 19:17:39.816] scp: /var/log/node-problem-detector.log*: No such file or directory
W0315 19:17:39.816] scp: /var/log/kubelet.cov*: No such file or directory
W0315 19:17:52.001] scp: /var/log/startupscript.log*: No such file or directory
W0315 19:17:52.006] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0315 19:17:52.441] scp: /var/log/startupscript.log*: No such file or directory
W0315 19:17:52.446] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0315 19:17:52.626] scp: /var/log/startupscript.log*: No such file or directory
W0315 19:17:52.631] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0315 19:17:57.127] scp: /var/log/startupscript.log*: No such file or directory
W0315 19:17:57.132] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0315 19:17:57.333] scp: /var/log/startupscript.log*: No such file or directory
W0315 19:17:57.338] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0315 19:17:57.622] scp: /var/log/startupscript.log*: No such file or directory
W0315 19:17:57.628] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0315 19:17:58.449] scp: /var/log/startupscript.log*: No such file or directory
W0315 19:17:58.453] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0315 19:17:59.313] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-75265-ac87c-kubemark-master].
W0315 19:18:01.532] INSTANCE_GROUPS=e2e-75265-ac87c-minion-group
W0315 19:18:01.533] NODE_NAMES=e2e-75265-ac87c-minion-group-22k5 e2e-75265-ac87c-minion-group-5d7r e2e-75265-ac87c-minion-group-9ldt e2e-75265-ac87c-minion-group-9xzm e2e-75265-ac87c-minion-group-sdql e2e-75265-ac87c-minion-group-tgkb e2e-75265-ac87c-minion-group-v13g
I0315 19:18:02.518] Failures for e2e-75265-ac87c-minion-group
W0315 19:18:03.705] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/disks/e2e-75265-ac87c-kubemark-master-pd].
W0315 19:18:03.766] 2019/03/15 19:18:03 process.go:155: Step './cluster/log-dump/log-dump.sh /workspace/_artifacts' finished in 1m43.250118271s
... skipping 19 lines ...
W0315 19:18:35.348] .Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-75265-ac87c-kubemark-master-https].
W0315 19:18:35.576] 2019/03/15 19:18:35 process.go:155: Step './test/kubemark/stop-kubemark.sh' finished in 1m40.336024014s
W0315 19:21:35.094] .....................................Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instanceGroupManagers/e2e-75265-ac87c-minion-group].
W0315 19:21:35.095] done.
W0315 19:21:44.454] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/instanceTemplates/e2e-75265-ac87c-minion-template].
W0315 19:21:50.517] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/instanceTemplates/e2e-75265-ac87c-windows-node-template].
I0315 19:22:01.007] {"message":"Internal Server Error"}Removing etcd replica, name: e2e-75265-ac87c-master, port: 2379, result: 0
I0315 19:22:02.587] {"message":"Internal Server Error"}Removing etcd replica, name: e2e-75265-ac87c-master, port: 4002, result: 0
W0315 19:22:08.955] Updated [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-75265-ac87c-master].
W0315 19:24:07.880] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-75265-ac87c-master].
W0315 19:24:31.886] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-75265-ac87c-master-https].
W0315 19:24:33.267] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-75265-ac87c-minion-all].
W0315 19:24:37.925] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-75265-ac87c-master-etcd].
W0315 19:24:47.252] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/regions/us-east1/addresses/e2e-75265-ac87c-master-ip].
... skipping 9 lines ...
I0315 19:26:22.756] Property "users.k8s-presubmit-scale_e2e-75265-ac87c-basic-auth" unset.
I0315 19:26:22.912] Property "contexts.k8s-presubmit-scale_e2e-75265-ac87c" unset.
I0315 19:26:22.917] Cleared config for k8s-presubmit-scale_e2e-75265-ac87c from /go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
I0315 19:26:22.917] Done
W0315 19:26:22.968] 2019/03/15 19:26:22 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 8m19.153552704s
W0315 19:26:22.968] 2019/03/15 19:26:22 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0315 19:26:22.969] 2019/03/15 19:26:22 main.go:307: Something went wrong: encountered 1 errors: [error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --nodes=500 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/load/kubemark/500_nodes/override.yaml: exit status 255]
W0315 19:26:22.969] Traceback (most recent call last):
W0315 19:26:22.969]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 764, in <module>
W0315 19:26:22.969]     main(parse_args())
W0315 19:26:22.969]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 615, in main
W0315 19:26:22.969]     mode.start(runner_args)
W0315 19:26:22.969]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0315 19:26:22.970]     check_env(env, self.command, *args)
W0315 19:26:22.970]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0315 19:26:22.970]     subprocess.check_call(cmd, env=env)
W0315 19:26:22.970]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0315 19:26:22.970]     raise CalledProcessError(retcode, cmd)
W0315 19:26:22.971] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--build=bazel', '--stage=gs://kubernetes-release-pull/ci/pull-kubernetes-kubemark-e2e-gce-big', '--up', '--down', '--provider=gce', '--cluster=e2e-75265-ac87c', '--gcp-network=e2e-75265-ac87c', '--extract=local', '--gcp-master-size=n1-standard-4', '--gcp-node-size=n1-standard-8', '--gcp-nodes=7', '--gcp-project=k8s-presubmit-scale', '--gcp-zone=us-east1-b', '--kubemark', '--kubemark-nodes=500', '--test_args=--ginkgo.focus=xxxx', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--nodes=500', '--test-cmd-args=--provider=kubemark', '--test-cmd-args=--report-dir=/workspace/_artifacts', '--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testoverrides=./testing/load/kubemark/500_nodes/override.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=100m')' returned non-zero exit status 1
E0315 19:26:22.971] Command failed
I0315 19:26:22.971] process 706 exited with code 1 after 42.4m
E0315 19:26:22.972] FAIL: pull-kubernetes-kubemark-e2e-gce-big
I0315 19:26:22.972] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0315 19:26:23.487] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0315 19:26:23.595] process 94089 exited with code 0 after 0.0m
I0315 19:26:23.595] Call:  gcloud config get-value account
I0315 19:26:23.915] process 94101 exited with code 0 after 0.0m
I0315 19:26:23.915] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0315 19:26:23.915] Upload result and artifacts...
I0315 19:26:23.915] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/75265/pull-kubernetes-kubemark-e2e-gce-big/41483
I0315 19:26:23.916] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/75265/pull-kubernetes-kubemark-e2e-gce-big/41483/artifacts
W0315 19:26:25.092] CommandException: One or more URLs matched no objects.
E0315 19:26:25.253] Command failed
I0315 19:26:25.253] process 94113 exited with code 1 after 0.0m
W0315 19:26:25.253] Remote dir gs://kubernetes-jenkins/pr-logs/pull/75265/pull-kubernetes-kubemark-e2e-gce-big/41483/artifacts not exist yet
I0315 19:26:25.254] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/75265/pull-kubernetes-kubemark-e2e-gce-big/41483/artifacts
I0315 19:26:41.709] process 94255 exited with code 0 after 0.3m
I0315 19:26:41.710] Call:  git rev-parse HEAD
I0315 19:26:41.715] process 96457 exited with code 0 after 0.0m
... skipping 21 lines ...