PR: hwdef: del unuse var in pkg/controller
Result: FAILURE
Tests: 1 failed / 7 succeeded
Started: 2019-09-16 07:05
Elapsed: 24m22s
Revision:
Builder: gke-prow-ssd-pool-1a225945-wbxk
Refs: master:ba075272, 82740:f2366a8c
pod: 34e12677-d850-11e9-8256-eafb387a1e74
infra-commit: e1cbc3ccd
job-version: v1.17.0-alpha.0.1442+32978ae17d564b
repo: k8s.io/kubernetes
repo-commit: 32978ae17d564bdbe0ebec2575bab727dc4acb8d
repos: {u'k8s.io/kubernetes': u'master:ba07527278ef2cde9c27886ec3333cfef472112a,82740:f2366a8c1c7ed7590ccaaea667d7f820f214a8b2', u'k8s.io/perf-tests': u'master', u'k8s.io/release': u'master'}
revision: v1.17.0-alpha.0.1442+32978ae17d564b

Test Failures


Up 34s

error during ./hack/e2e-internal/e2e-up.sh: exit status 1
				from junit_runner.xml

Error lines from build-log.txt

... skipping 261 lines ...
W0916 07:16:53.666] INFO: 5219 processes: 5219 processwrapper-sandbox.
W0916 07:16:53.675] INFO: Build completed successfully, 5312 total actions
W0916 07:16:53.678] INFO: Build completed successfully, 5312 total actions
W0916 07:16:53.709] 2019/09/16 07:16:53 process.go:155: Step 'make -C /go/src/k8s.io/kubernetes bazel-release' finished in 10m20.063106485s
W0916 07:16:53.710] 2019/09/16 07:16:53 util.go:255: Flushing memory.
I0916 07:16:53.810] make: Leaving directory '/go/src/k8s.io/kubernetes'
W0916 07:17:00.130] 2019/09/16 07:17:00 util.go:265: flushMem error (page cache): exit status 1
W0916 07:17:00.131] 2019/09/16 07:17:00 process.go:153: Running: /go/src/k8s.io/release/push-build.sh --nomock --verbose --noupdatelatest --bucket=kubernetes-release-pull --ci --gcs-suffix=/pull-kubernetes-e2e-gce-100-performance --allow-dup
I0916 07:17:00.239] push-build.sh: BEGIN main on 34e12677-d850-11e9-8256-eafb387a1e74 Mon Sep 16 07:17:00 UTC 2019
I0916 07:17:00.239] 
W0916 07:17:00.340] $TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
W0916 07:17:01.415] Loading: 
W0916 07:17:01.417] Loading: 0 packages loaded
... skipping 691 lines ...
W0916 07:20:59.775] INSTANCE_GROUPS=e2e-82740-95a39-minion-group
W0916 07:20:59.778] NODE_NAMES=e2e-82740-95a39-minion-group-04jd e2e-82740-95a39-minion-group-0d85 e2e-82740-95a39-minion-group-0gm9 e2e-82740-95a39-minion-group-0hml e2e-82740-95a39-minion-group-0n58 e2e-82740-95a39-minion-group-1bsl e2e-82740-95a39-minion-group-1clx e2e-82740-95a39-minion-group-1jjf e2e-82740-95a39-minion-group-1z7f e2e-82740-95a39-minion-group-201p e2e-82740-95a39-minion-group-2khf e2e-82740-95a39-minion-group-2llx e2e-82740-95a39-minion-group-3p3g e2e-82740-95a39-minion-group-4dlc e2e-82740-95a39-minion-group-52wh e2e-82740-95a39-minion-group-58vh e2e-82740-95a39-minion-group-5l0x e2e-82740-95a39-minion-group-5mvk e2e-82740-95a39-minion-group-5rbr e2e-82740-95a39-minion-group-5wlr e2e-82740-95a39-minion-group-655w e2e-82740-95a39-minion-group-6kkc e2e-82740-95a39-minion-group-6x03 e2e-82740-95a39-minion-group-6xjb e2e-82740-95a39-minion-group-72hs e2e-82740-95a39-minion-group-78lf e2e-82740-95a39-minion-group-7r39 e2e-82740-95a39-minion-group-8bhh e2e-82740-95a39-minion-group-8lpf e2e-82740-95a39-minion-group-97dt e2e-82740-95a39-minion-group-9946 e2e-82740-95a39-minion-group-9gp0 e2e-82740-95a39-minion-group-9z4d e2e-82740-95a39-minion-group-b2qc e2e-82740-95a39-minion-group-b4bm e2e-82740-95a39-minion-group-bpsw e2e-82740-95a39-minion-group-c30l e2e-82740-95a39-minion-group-c7vk e2e-82740-95a39-minion-group-cn89 e2e-82740-95a39-minion-group-cw9m e2e-82740-95a39-minion-group-dl9x e2e-82740-95a39-minion-group-dltg e2e-82740-95a39-minion-group-dp62 e2e-82740-95a39-minion-group-dr8g e2e-82740-95a39-minion-group-f022 e2e-82740-95a39-minion-group-g1jg e2e-82740-95a39-minion-group-glml e2e-82740-95a39-minion-group-gx6d e2e-82740-95a39-minion-group-h6d0 e2e-82740-95a39-minion-group-hcls e2e-82740-95a39-minion-group-hh7s e2e-82740-95a39-minion-group-hw33 e2e-82740-95a39-minion-group-j011 e2e-82740-95a39-minion-group-j13k e2e-82740-95a39-minion-group-j48r e2e-82740-95a39-minion-group-j873 e2e-82740-95a39-minion-group-jhqv e2e-82740-95a39-minion-group-jmfw e2e-82740-95a39-minion-group-k296 e2e-82740-95a39-minion-group-kbl4 e2e-82740-95a39-minion-group-kddg e2e-82740-95a39-minion-group-kf8v e2e-82740-95a39-minion-group-ldxt e2e-82740-95a39-minion-group-lq48 e2e-82740-95a39-minion-group-lqv5 e2e-82740-95a39-minion-group-mlrk e2e-82740-95a39-minion-group-mm9p e2e-82740-95a39-minion-group-n24t e2e-82740-95a39-minion-group-n9vk e2e-82740-95a39-minion-group-nbgj e2e-82740-95a39-minion-group-ng9p e2e-82740-95a39-minion-group-p4f9 e2e-82740-95a39-minion-group-p5j3 e2e-82740-95a39-minion-group-plj9 e2e-82740-95a39-minion-group-pxbm e2e-82740-95a39-minion-group-pzws e2e-82740-95a39-minion-group-qxlx e2e-82740-95a39-minion-group-r7tj e2e-82740-95a39-minion-group-rfpd e2e-82740-95a39-minion-group-rgp1 e2e-82740-95a39-minion-group-sdbq e2e-82740-95a39-minion-group-sdf5 e2e-82740-95a39-minion-group-sfrg e2e-82740-95a39-minion-group-sp6p e2e-82740-95a39-minion-group-srd2 e2e-82740-95a39-minion-group-tnj2 e2e-82740-95a39-minion-group-v553 e2e-82740-95a39-minion-group-vl1n e2e-82740-95a39-minion-group-vzdt e2e-82740-95a39-minion-group-vzl4 e2e-82740-95a39-minion-group-w0tp e2e-82740-95a39-minion-group-wggl e2e-82740-95a39-minion-group-wgrj e2e-82740-95a39-minion-group-wkpw e2e-82740-95a39-minion-group-wrpx e2e-82740-95a39-minion-group-xfgk e2e-82740-95a39-minion-group-z8k6 e2e-82740-95a39-minion-group-zcbq e2e-82740-95a39-minion-group-zcrv e2e-82740-95a39-minion-group-zlnm
W0916 07:21:02.912] Deleting Managed Instance Group...
W0916 07:23:59.543] .....................................Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instanceGroupManagers/e2e-82740-95a39-minion-group].
W0916 07:23:59.543] done.
W0916 07:24:08.978] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/instanceTemplates/e2e-82740-95a39-minion-template].
W0916 07:24:12.137] ERROR: (gcloud.compute.instance-templates.delete) Could not fetch resource:
W0916 07:24:12.138]  - The resource 'projects/k8s-presubmit-scale/global/instanceTemplates/e2e-82740-95a39-windows-node-template' was not found
W0916 07:24:12.138] 
W0916 07:24:18.528] Warning: Permanently added 'compute.2633908242879779737' (ED25519) to the list of known hosts.
I0916 07:24:18.994] Removing etcd replica, name: e2e-82740-95a39-master, port: 2379, result: 52
I0916 07:24:20.606] {"message":"Internal Server Error"}Removing etcd replica, name: e2e-82740-95a39-master, port: 4002, result: 0
W0916 07:24:27.385] Updated [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-82740-95a39-master].
W0916 07:26:58.139] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-82740-95a39-master].
W0916 07:27:05.476] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0916 07:27:05.476]  - The resource 'projects/k8s-presubmit-scale/global/firewalls/e2e-82740-95a39-master-etcd' is not ready
W0916 07:27:05.476] 
W0916 07:27:25.455] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-82740-95a39-master-https].
W0916 07:27:27.620] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-82740-95a39-minion-all].
W0916 07:27:29.938] ERROR: (gcloud.compute.addresses.delete) Could not fetch resource:
W0916 07:27:29.939]  - The resource 'projects/k8s-presubmit-scale/regions/us-east1/addresses/e2e-82740-95a39-master-ip' is not ready
W0916 07:27:29.939] 
W0916 07:27:53.569] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-82740-95a39-default-internal-master].
W0916 07:27:54.729] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-82740-95a39-default-internal-node].
W0916 07:27:55.260] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-82740-95a39-default-ssh].
I0916 07:27:56.482] Deleting firewall rules remaining in network e2e-82740-95a39: 
I0916 07:27:57.356] Deleting custom subnet...
W0916 07:27:58.991] ERROR: (gcloud.compute.networks.subnets.delete) Could not fetch resource:
W0916 07:27:58.991]  - The resource 'projects/k8s-presubmit-scale/regions/us-east1/subnetworks/e2e-82740-95a39-custom-subnet' is not ready
W0916 07:27:58.992] 
W0916 07:28:07.954] ERROR: (gcloud.compute.networks.delete) Could not fetch resource:
W0916 07:28:07.955]  - The network resource 'projects/k8s-presubmit-scale/global/networks/e2e-82740-95a39' is already being used by 'projects/k8s-presubmit-scale/regions/us-east1/subnetworks/e2e-82740-95a39-custom-subnet'
W0916 07:28:07.955] 
I0916 07:28:08.055] Failed to delete network 'e2e-82740-95a39'. Listing firewall-rules:
W0916 07:28:08.910] 
W0916 07:28:08.911] To show all fields of the firewall, please show in JSON format: --format=json
W0916 07:28:08.911] To show all fields in table format, please see the examples in --help.
W0916 07:28:08.911] 
W0916 07:28:09.188] W0916 07:28:09.188647   73473 loader.go:223] Config not found: /workspace/.kube/config
I0916 07:28:09.366] Property "clusters.k8s-presubmit-scale_e2e-82740-95a39" unset.
... skipping 25 lines ...
W0916 07:28:10.675] Zone: us-east1-b
I0916 07:28:20.852] +++ Staging tars to Google Storage: gs://kubernetes-staging-141a37ea6d/e2e-82740-95a39-devel
I0916 07:28:35.870] +++ kubernetes-server-linux-amd64.tar.gz uploaded (sha1 = 72be661610e12d0df2ec556dae3ce9d2726dad76)
I0916 07:28:38.164] +++ kubernetes-manifests.tar.gz uploaded earlier, cloud and local file md5 match (md5 = 16c0cb40be44aafcb845d4e024a8c9eb)
I0916 07:28:39.956] Found existing network e2e-82740-95a39 in CUSTOM mode.
W0916 07:28:41.452] Creating firewall...
W0916 07:28:41.769] failed.
W0916 07:28:41.774] ERROR: (gcloud.compute.firewall-rules.create) Could not fetch resource:
W0916 07:28:41.774]  - The resource 'projects/k8s-presubmit-scale/global/networks/e2e-82740-95a39' is not ready
W0916 07:28:41.775] 
W0916 07:28:42.306] Creating firewall...
I0916 07:28:42.613] IP aliases are enabled. Creating subnetworks.
W0916 07:28:42.713] failed.
W0916 07:28:42.714] ERROR: (gcloud.compute.firewall-rules.create) Could not fetch resource:
W0916 07:28:42.714]  - The resource 'projects/k8s-presubmit-scale/global/networks/e2e-82740-95a39' is not ready
W0916 07:28:42.714] 
W0916 07:28:43.190] Creating firewall...
I0916 07:28:43.470] Creating subnet e2e-82740-95a39:e2e-82740-95a39-custom-subnet
W0916 07:28:43.629] failed.
W0916 07:28:43.629] ERROR: (gcloud.compute.firewall-rules.create) Could not fetch resource:
W0916 07:28:43.629]  - The resource 'projects/k8s-presubmit-scale/global/networks/e2e-82740-95a39' is not ready
W0916 07:28:43.630] 
W0916 07:28:44.257] ERROR: (gcloud.compute.networks.subnets.create) Could not fetch resource:
W0916 07:28:44.258]  - Internal error. Please try again or contact Google Support. (Code: '-6602010725252967849')
W0916 07:28:44.258] 
W0916 07:28:44.335] 2019/09/16 07:28:44 process.go:155: Step './hack/e2e-internal/e2e-up.sh' finished in 34.395634237s
W0916 07:28:44.335] 2019/09/16 07:28:44 e2e.go:519: Dumping logs from nodes to GCS directly at path: gs://kubernetes-jenkins/pr-logs/pull/82740/pull-kubernetes-e2e-gce-100-performance/1173492967801884672/artifacts
W0916 07:28:44.336] 2019/09/16 07:28:44 process.go:153: Running: ./cluster/log-dump/log-dump.sh /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/82740/pull-kubernetes-e2e-gce-100-performance/1173492967801884672/artifacts
W0916 07:28:44.425] Trying to find master named 'e2e-82740-95a39-master'
W0916 07:28:44.425] Looking for address 'e2e-82740-95a39-master-ip'
I0916 07:28:44.526] Checking for custom logdump instances, if any
I0916 07:28:44.527] Sourcing kube-util.sh
I0916 07:28:44.527] Detecting project
I0916 07:28:44.527] Project: k8s-presubmit-scale
I0916 07:28:44.527] Network Project: k8s-presubmit-scale
I0916 07:28:44.528] Zone: us-east1-b
I0916 07:28:44.528] Dumping logs from master locally to '/workspace/_artifacts'
W0916 07:28:45.155] ERROR: (gcloud.compute.addresses.describe) Could not fetch resource:
W0916 07:28:45.155]  - The resource 'projects/k8s-presubmit-scale/regions/us-east1/addresses/e2e-82740-95a39-master-ip' was not found
W0916 07:28:45.155] 
W0916 07:28:45.252] Could not detect Kubernetes master node.  Make sure you've launched a cluster with 'kube-up.sh'
I0916 07:28:45.353] Master not detected. Is the cluster up?
I0916 07:28:45.354] Dumping logs from nodes to GCS directly at 'gs://kubernetes-jenkins/pr-logs/pull/82740/pull-kubernetes-e2e-gce-100-performance/1173492967801884672/artifacts' using logexporter
I0916 07:28:45.354] Detecting nodes in the cluster
... skipping 32 lines ...
I0916 07:29:24.429] Cleared config for k8s-presubmit-scale_e2e-82740-95a39 from /workspace/.kube/config
I0916 07:29:24.429] Done
W0916 07:29:24.445] W0916 07:29:24.424571   76648 loader.go:223] Config not found: /workspace/.kube/config
W0916 07:29:24.445] W0916 07:29:24.424851   76648 loader.go:223] Config not found: /workspace/.kube/config
W0916 07:29:24.445] 2019/09/16 07:29:24 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 33.685692058s
W0916 07:29:24.446] 2019/09/16 07:29:24 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0916 07:29:24.446] 2019/09/16 07:29:24 main.go:319: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 1
W0916 07:29:24.446] Traceback (most recent call last):
W0916 07:29:24.446]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W0916 07:29:24.446]     main(parse_args())
W0916 07:29:24.447]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W0916 07:29:24.447]     mode.start(runner_args)
W0916 07:29:24.447]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0916 07:29:24.447]     check_env(env, self.command, *args)
W0916 07:29:24.447]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0916 07:29:24.447]     subprocess.check_call(cmd, env=env)
W0916 07:29:24.447]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0916 07:29:24.447]     raise CalledProcessError(retcode, cmd)
W0916 07:29:24.449] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--build=bazel', '--stage=gs://kubernetes-release-pull/ci/pull-kubernetes-e2e-gce-100-performance', '--up', '--down', '--provider=gce', '--cluster=e2e-82740-95a39', '--gcp-network=e2e-82740-95a39', '--extract=local', '--gcp-nodes=100', '--gcp-project=k8s-presubmit-scale', '--gcp-zone=us-east1-b', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--nodes=100', '--test-cmd-args=--prometheus-scrape-etcd', '--test-cmd-args=--provider=gce', '--test-cmd-args=--report-dir=/workspace/_artifacts', '--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testoverrides=./testing/density/100_nodes/override.yaml', '--test-cmd-args=--testoverrides=./testing/load/gce/throughput_override.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_pvs.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=100m', '--logexporter-gcs-path=gs://kubernetes-jenkins/pr-logs/pull/82740/pull-kubernetes-e2e-gce-100-performance/1173492967801884672/artifacts')' returned non-zero exit status 1
E0916 07:29:24.449] Command failed
I0916 07:29:24.449] process 523 exited with code 1 after 22.9m
E0916 07:29:24.449] FAIL: pull-kubernetes-e2e-gce-100-performance
I0916 07:29:24.450] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0916 07:29:24.970] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0916 07:29:25.041] process 76657 exited with code 0 after 0.0m
I0916 07:29:25.042] Call:  gcloud config get-value account
I0916 07:29:25.350] process 76669 exited with code 0 after 0.0m
I0916 07:29:25.351] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0916 07:29:25.351] Upload result and artifacts...
I0916 07:29:25.351] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/82740/pull-kubernetes-e2e-gce-100-performance/1173492967801884672
I0916 07:29:25.351] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/82740/pull-kubernetes-e2e-gce-100-performance/1173492967801884672/artifacts
W0916 07:29:26.397] CommandException: One or more URLs matched no objects.
E0916 07:29:26.542] Command failed
I0916 07:29:26.543] process 76681 exited with code 1 after 0.0m
W0916 07:29:26.543] Remote dir gs://kubernetes-jenkins/pr-logs/pull/82740/pull-kubernetes-e2e-gce-100-performance/1173492967801884672/artifacts not exist yet
I0916 07:29:26.544] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/82740/pull-kubernetes-e2e-gce-100-performance/1173492967801884672/artifacts
I0916 07:29:28.353] process 76823 exited with code 0 after 0.0m
I0916 07:29:28.354] Call:  git rev-parse HEAD
I0916 07:29:28.360] process 77347 exited with code 0 after 0.0m
... skipping 20 lines ...