PR: draveness: feat: update taint nodes by condition to GA
Result: FAILURE
Tests: 1 failed / 7 succeeded
Started: 2019-09-17 16:03
Elapsed: 28m28s
Builder: gke-prow-ssd-pool-1a225945-6r1w
Refs: master:3a19f1e8, 82703:d19ce5d9
pod: 7723afe8-d964-11e9-ab16-168922f32233
infra-commit: 0584b6f9c
job-version: v1.17.0-alpha.0.1494+e91716ca3befe6
repo: k8s.io/kubernetes
repo-commit: e91716ca3befe6ce091ff2ab5bcd956108dedc50
repos: {u'k8s.io/kubernetes': u'master:3a19f1e80b172dfceb06ffe654b1e349bac53f73,82703:d19ce5d981f07c770d9d74570cd3cc9de1c724df', u'k8s.io/perf-tests': u'master', u'k8s.io/release': u'master'}
revision: v1.17.0-alpha.0.1494+e91716ca3befe6

Test Failures


Up (32s)

error during ./hack/e2e-internal/e2e-up.sh: exit status 1
(from junit_runner.xml)



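The "Error lines" view below is a filtered excerpt of build-log.txt: lines carry a glog-style severity/timestamp prefix (`I`/`W`/`E`/`F`), and the view surfaces only those whose message looks like an error. A minimal sketch of such a filter (a hypothetical helper for illustration, not the actual Gubernator code):

```python
import re

# glog-style prefix, e.g. "W0917 16:21:37.947] <message>"
ERROR_RE = re.compile(r"^[IWEF]\d{4} [\d:.]+\]\s*(?P<msg>.*)")

def error_lines(log_text, markers=("ERROR:", "FAIL")):
    """Return message bodies of log lines that mention an error marker."""
    hits = []
    for line in log_text.splitlines():
        m = ERROR_RE.match(line)
        body = m.group("msg") if m else line
        if any(marker in body for marker in markers):
            hits.append(body)
    return hits

sample = """\
I0917 16:21:36.752] Deleting custom subnet...
W0917 16:21:37.947] ERROR: (gcloud.compute.networks.subnets.delete) Could not fetch resource:
I0917 16:22:16.782] Using subnet e2e-82703-ac87c-custom-subnet
"""
print(error_lines(sample))
```

Run against the sample above, only the gcloud `ERROR:` line survives the filter; the surrounding `I`-level progress lines are dropped.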

Error lines from build-log.txt

... skipping 1002 lines ...
I0917 16:21:10.897] e2e-82703-ac87c-master-https
I0917 16:21:10.897] e2e-82703-ac87c-minion-all
W0917 16:21:33.939] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-82703-ac87c-master-etcd].
W0917 16:21:34.809] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-82703-ac87c-master-https].
W0917 16:21:35.813] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-82703-ac87c-minion-all].
I0917 16:21:36.752] Deleting custom subnet...
W0917 16:21:37.947] ERROR: (gcloud.compute.networks.subnets.delete) Could not fetch resource:
W0917 16:21:37.948]  - The subnetwork resource 'projects/k8s-presubmit-scale/regions/us-east1/subnetworks/e2e-82703-ac87c-custom-subnet' is already being used by 'projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-82703-ac87c-master'
W0917 16:21:37.948] 
W0917 16:21:43.451] ERROR: (gcloud.compute.networks.delete) Could not fetch resource:
W0917 16:21:43.451]  - The network resource 'projects/k8s-presubmit-scale/global/networks/e2e-82703-ac87c' is already being used by 'projects/k8s-presubmit-scale/zones/us-east1-b/instanceGroupManagers/e2e-82703-ac87c-minion-group'
W0917 16:21:43.451] 
I0917 16:21:43.552] Failed to delete network 'e2e-82703-ac87c'. Listing firewall-rules:
W0917 16:21:44.477] 
W0917 16:21:44.478] To show all fields of the firewall, please show in JSON format: --format=json
W0917 16:21:44.478] To show all fields in table format, please see the examples in --help.
W0917 16:21:44.478] 
W0917 16:21:44.701] W0917 16:21:44.700789   78668 loader.go:223] Config not found: /workspace/.kube/config
W0917 16:21:44.844] W0917 16:21:44.844727   78714 loader.go:223] Config not found: /workspace/.kube/config
... skipping 32 lines ...
W0917 16:22:15.625] .Creating firewall...
W0917 16:22:16.681] ...Creating firewall...
I0917 16:22:16.782] IP aliases are enabled. Creating subnetworks.
I0917 16:22:16.782] Using subnet e2e-82703-ac87c-custom-subnet
I0917 16:22:16.782] Starting master and configuring firewalls
I0917 16:22:16.782] Configuring firewall for apiserver konnectivity server
W0917 16:22:17.426] .ERROR: (gcloud.compute.disks.create) Could not fetch resource:
W0917 16:22:17.427]  - The resource 'projects/k8s-presubmit-scale/zones/us-east1-b/disks/e2e-82703-ac87c-master-pd' already exists
W0917 16:22:17.427] 
W0917 16:22:17.508] 2019/09/17 16:22:17 process.go:155: Step './hack/e2e-internal/e2e-up.sh' finished in 32.206555335s
W0917 16:22:17.508] 2019/09/17 16:22:17 e2e.go:522: Dumping logs locally to: /workspace/_artifacts
W0917 16:22:17.508] 2019/09/17 16:22:17 process.go:153: Running: ./cluster/log-dump/log-dump.sh /workspace/_artifacts
W0917 16:22:17.573] Trying to find master named 'e2e-82703-ac87c-master'
... skipping 27 lines ...
W0917 16:23:03.506] 
W0917 16:23:03.507] Specify --start=43122 in the next get-serial-port-output invocation to get only the new output starting from here.
W0917 16:23:05.877] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0917 16:23:05.945] scp: /var/log/fluentd.log*: No such file or directory
W0917 16:23:05.945] scp: /var/log/kubelet.cov*: No such file or directory
W0917 16:23:05.945] scp: /var/log/startupscript.log*: No such file or directory
W0917 16:23:05.949] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0917 16:23:06.049] Dumping logs from nodes locally to '/workspace/_artifacts'
I0917 16:23:06.050] Detecting nodes in the cluster
I0917 16:23:50.246] Changing logfiles to be world-readable for download
I0917 16:23:50.290] Changing logfiles to be world-readable for download
I0917 16:23:50.669] Changing logfiles to be world-readable for download
I0917 16:23:50.746] Changing logfiles to be world-readable for download
... skipping 24 lines ...
W0917 16:23:57.104] scp: /var/log/node-problem-detector.log*: No such file or directory
W0917 16:23:57.104] scp: /var/log/kubelet.cov*: No such file or directory
W0917 16:23:57.104] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0917 16:23:57.104] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0917 16:23:57.104] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0917 16:23:57.104] scp: /var/log/startupscript.log*: No such file or directory
W0917 16:23:57.111] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0917 16:23:57.180] scp: /var/log/kube-proxy.log*: No such file or directory
W0917 16:23:57.180] scp: /var/log/fluentd.log*: No such file or directory
W0917 16:23:57.181] scp: /var/log/node-problem-detector.log*: No such file or directory
W0917 16:23:57.181] scp: /var/log/kubelet.cov*: No such file or directory
W0917 16:23:57.181] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0917 16:23:57.181] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0917 16:23:57.181] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0917 16:23:57.181] scp: /var/log/startupscript.log*: No such file or directory
W0917 16:23:57.187] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0917 16:23:57.197] scp: /var/log/kube-proxy.log*: No such file or directory
W0917 16:23:57.198] scp: /var/log/fluentd.log*: No such file or directory
W0917 16:23:57.198] scp: /var/log/node-problem-detector.log*: No such file or directory
W0917 16:23:57.198] scp: /var/log/kubelet.cov*: No such file or directory
W0917 16:23:57.198] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0917 16:23:57.198] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0917 16:23:57.198] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0917 16:23:57.198] scp: /var/log/startupscript.log*: No such file or directory
W0917 16:23:57.202] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0917 16:23:57.203] scp: /var/log/kube-proxy.log*: No such file or directory
W0917 16:23:57.204] scp: /var/log/fluentd.log*: No such file or directory
W0917 16:23:57.204] scp: /var/log/node-problem-detector.log*: No such file or directory
W0917 16:23:57.204] scp: /var/log/kubelet.cov*: No such file or directory
W0917 16:23:57.204] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0917 16:23:57.205] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0917 16:23:57.205] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0917 16:23:57.205] scp: /var/log/startupscript.log*: No such file or directory
W0917 16:23:57.211] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0917 16:23:57.333] scp: /var/log/kube-proxy.log*: No such file or directory
W0917 16:23:57.333] scp: /var/log/fluentd.log*: No such file or directory
W0917 16:23:57.333] scp: /var/log/node-problem-detector.log*: No such file or directory
W0917 16:23:57.333] scp: /var/log/kubelet.cov*: No such file or directory
W0917 16:23:57.333] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0917 16:23:57.334] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0917 16:23:57.334] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0917 16:23:57.334] scp: /var/log/startupscript.log*: No such file or directory
W0917 16:23:57.345] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0917 16:23:57.412] scp: /var/log/kube-proxy.log*: No such file or directory
W0917 16:23:57.412] scp: /var/log/fluentd.log*: No such file or directory
W0917 16:23:57.412] scp: /var/log/node-problem-detector.log*: No such file or directory
W0917 16:23:57.412] scp: /var/log/kubelet.cov*: No such file or directory
W0917 16:23:57.412] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0917 16:23:57.413] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0917 16:23:57.413] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0917 16:23:57.413] scp: /var/log/startupscript.log*: No such file or directory
W0917 16:23:57.421] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0917 16:23:58.638] 
W0917 16:23:58.638] Specify --start=39801 in the next get-serial-port-output invocation to get only the new output starting from here.
W0917 16:24:00.157] scp: /var/log/kube-proxy.log*: No such file or directory
W0917 16:24:00.158] scp: /var/log/fluentd.log*: No such file or directory
W0917 16:24:00.158] scp: /var/log/node-problem-detector.log*: No such file or directory
W0917 16:24:00.158] scp: /var/log/kubelet.cov*: No such file or directory
W0917 16:24:00.159] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0917 16:24:00.159] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0917 16:24:00.159] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0917 16:24:00.159] scp: /var/log/startupscript.log*: No such file or directory
W0917 16:24:00.163] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0917 16:24:03.663] INSTANCE_GROUPS=e2e-82703-ac87c-minion-group
W0917 16:24:03.664] NODE_NAMES=e2e-82703-ac87c-minion-group-0s14 e2e-82703-ac87c-minion-group-1n69 e2e-82703-ac87c-minion-group-6tqr e2e-82703-ac87c-minion-group-fdv8 e2e-82703-ac87c-minion-group-h3sz e2e-82703-ac87c-minion-group-p8hk e2e-82703-ac87c-minion-group-s1n3
I0917 16:24:04.573] Failures for e2e-82703-ac87c-minion-group (if any):
W0917 16:24:05.600] 2019/09/17 16:24:05 process.go:155: Step './cluster/log-dump/log-dump.sh /workspace/_artifacts' finished in 1m48.092496997s
W0917 16:24:05.601] 2019/09/17 16:24:05 process.go:153: Running: ./hack/e2e-internal/e2e-down.sh
W0917 16:24:05.655] Project: k8s-presubmit-scale
... skipping 13 lines ...
W0917 16:24:14.279] Deleting Managed Instance Group...
W0917 16:26:09.774] .........................Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instanceGroupManagers/e2e-82703-ac87c-minion-group].
W0917 16:26:09.774] done.
W0917 16:26:15.829] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/instanceTemplates/e2e-82703-ac87c-minion-template].
W0917 16:26:22.244] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/instanceTemplates/e2e-82703-ac87c-windows-node-template].
I0917 16:26:31.143] Removing etcd replica, name: e2e-82703-ac87c-master, port: 2379, result: 52
I0917 16:26:32.687] {"message":"Internal Server Error"}Removing etcd replica, name: e2e-82703-ac87c-master, port: 4002, result: 0
W0917 16:26:39.459] Updated [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-82703-ac87c-master].
W0917 16:29:06.450] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-82703-ac87c-master].
W0917 16:29:30.630] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-82703-ac87c-master-https].
W0917 16:29:39.447] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/regions/us-east1/addresses/e2e-82703-ac87c-master-ip].
W0917 16:30:05.126] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-82703-ac87c-default-internal-master].
W0917 16:30:11.178] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-82703-ac87c-default-internal-node].
... skipping 16 lines ...
I0917 16:31:21.315] Cleared config for k8s-presubmit-scale_e2e-82703-ac87c from /workspace/.kube/config
I0917 16:31:21.316] Done
W0917 16:31:21.366] W0917 16:31:21.311154   84694 loader.go:223] Config not found: /workspace/.kube/config
W0917 16:31:21.366] W0917 16:31:21.311365   84694 loader.go:223] Config not found: /workspace/.kube/config
W0917 16:31:21.366] 2019/09/17 16:31:21 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 7m15.717209858s
W0917 16:31:21.367] 2019/09/17 16:31:21 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0917 16:31:21.367] 2019/09/17 16:31:21 main.go:319: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 1
W0917 16:31:21.367] Traceback (most recent call last):
W0917 16:31:21.367]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W0917 16:31:21.367]     main(parse_args())
W0917 16:31:21.368]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W0917 16:31:21.368]     mode.start(runner_args)
W0917 16:31:21.368]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0917 16:31:21.368]     check_env(env, self.command, *args)
W0917 16:31:21.368]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0917 16:31:21.368]     subprocess.check_call(cmd, env=env)
W0917 16:31:21.368]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0917 16:31:21.368]     raise CalledProcessError(retcode, cmd)
W0917 16:31:21.370] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--build=bazel', '--stage=gs://kubernetes-release-pull/ci/pull-kubernetes-kubemark-e2e-gce-big', '--up', '--down', '--provider=gce', '--cluster=e2e-82703-ac87c', '--gcp-network=e2e-82703-ac87c', '--extract=local', '--gcp-master-size=n1-standard-4', '--gcp-node-size=n1-standard-8', '--gcp-nodes=7', '--gcp-project=k8s-presubmit-scale', '--gcp-zone=us-east1-b', '--kubemark', '--kubemark-nodes=500', '--test_args=--ginkgo.focus=xxxx', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--nodes=500', '--test-cmd-args=--provider=kubemark', '--test-cmd-args=--report-dir=/workspace/_artifacts', '--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testoverrides=./testing/load/kubemark/500_nodes/override.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_pvs.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=100m')' returned non-zero exit status 1
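The traceback above ends in `subprocess.check_call`, which is how `check_env()` in scenarios/kubernetes_e2e.py runs kubetest: any non-zero exit from the wrapped command (here, transitively, `./hack/e2e-internal/e2e-up.sh`) surfaces as a `CalledProcessError` carrying the return code. A minimal reproduction of that failure mode (Python 3 here, though the job above ran under Python 2.7):

```python
import subprocess

# check_call raises CalledProcessError when the child exits non-zero,
# which is exactly the path taken in the traceback above.
try:
    subprocess.check_call(["python3", "-c", "raise SystemExit(1)"])
except subprocess.CalledProcessError as err:
    print("exit status", err.returncode)
```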
E0917 16:31:21.370] Command failed
I0917 16:31:21.370] process 709 exited with code 1 after 27.0m
E0917 16:31:21.370] FAIL: pull-kubernetes-kubemark-e2e-gce-big
I0917 16:31:21.370] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0917 16:31:21.860] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0917 16:31:21.912] process 84704 exited with code 0 after 0.0m
I0917 16:31:21.912] Call:  gcloud config get-value account
I0917 16:31:22.213] process 84716 exited with code 0 after 0.0m
I0917 16:31:22.214] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0917 16:31:22.214] Upload result and artifacts...
I0917 16:31:22.214] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/82703/pull-kubernetes-kubemark-e2e-gce-big/1173990637226168320
I0917 16:31:22.215] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/82703/pull-kubernetes-kubemark-e2e-gce-big/1173990637226168320/artifacts
W0917 16:31:23.186] CommandException: One or more URLs matched no objects.
E0917 16:31:23.293] Command failed
I0917 16:31:23.294] process 84728 exited with code 1 after 0.0m
W0917 16:31:23.294] Remote dir gs://kubernetes-jenkins/pr-logs/pull/82703/pull-kubernetes-kubemark-e2e-gce-big/1173990637226168320/artifacts not exist yet
I0917 16:31:23.294] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/82703/pull-kubernetes-kubemark-e2e-gce-big/1173990637226168320/artifacts
I0917 16:31:25.996] process 84870 exited with code 0 after 0.0m
I0917 16:31:25.997] Call:  git rev-parse HEAD
I0917 16:31:26.001] process 85557 exited with code 0 after 0.0m
... skipping 21 lines ...