PR: hwdef: del unuse var in pkg/controller
Result: FAILURE
Tests: 1 failed / 7 succeeded
Started: 2019-09-16 07:06
Elapsed: 29m44s
Builder: gke-prow-ssd-pool-1a225945-810g
Refs: master:ba075272, 82740:851eac6a
pod: 689b4a0a-d850-11e9-8256-eafb387a1e74
infra-commit: e1cbc3ccd
job-version: v1.17.0-alpha.0.1442+24e22bbb1a9dc0
repo: k8s.io/kubernetes
repo-commit: 24e22bbb1a9dc0044141a619baced48200173db4
repos: {u'k8s.io/kubernetes': u'master:ba07527278ef2cde9c27886ec3333cfef472112a,82740:851eac6a979eeb80746ed49d126646e0957712f0', u'k8s.io/perf-tests': u'master', u'k8s.io/release': u'master'}
revision: v1.17.0-alpha.0.1442+24e22bbb1a9dc0
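The `repos` field above is stored as a Python dict literal (hence the `u''` prefixes). As a side note, such a field can be decoded safely with `ast.literal_eval`; a minimal sketch, with the values shortened to the short SHAs shown in the `Refs` field:

```python
import ast

# Sketch: parse a Prow "repos" metadata field, which is serialized as a
# Python dict literal. Values here are shortened for readability; the
# real field carries the full commit SHAs.
repos_field = ("{u'k8s.io/kubernetes': u'master:ba075272,82740:851eac6a', "
               "u'k8s.io/perf-tests': u'master', u'k8s.io/release': u'master'}")

repos = ast.literal_eval(repos_field)  # safe: only evaluates literals
print(repos["k8s.io/kubernetes"])
```

`ast.literal_eval` is preferable to `eval` here because it only accepts literal structures and cannot execute arbitrary code.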

Test Failures


Up (33s)

error during ./hack/e2e-internal/e2e-up.sh: exit status 1
				from junit_runner.xml
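The failure above is reported from the runner's JUnit output. A minimal sketch of how failed cases can be pulled out of a file like `junit_runner.xml` (this is illustrative, not the actual Prow/Gubernator code):

```python
import xml.etree.ElementTree as ET

def failed_cases(junit_xml: str):
    """Return (name, message) pairs for every <testcase> holding a <failure>."""
    root = ET.fromstring(junit_xml)
    failures = []
    # JUnit files may be rooted at <testsuite> or wrapped in <testsuites>;
    # iter() walks both shapes.
    for case in root.iter("testcase"):
        failure = case.find("failure")
        if failure is not None:
            failures.append((case.get("name"), (failure.text or "").strip()))
    return failures

if __name__ == "__main__":
    # Synthetic example mirroring this job: one failed step, one passed.
    sample = """<testsuite tests="8" failures="1">
      <testcase name="Up" time="33"><failure>error during ./hack/e2e-internal/e2e-up.sh: exit status 1</failure></testcase>
      <testcase name="DumpClusterLogs" time="117"/>
    </testsuite>"""
    print(failed_cases(sample))
```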




Error lines from build-log.txt

... skipping 974 lines ...
W0916 07:23:58.931] Zone: us-east1-b
I0916 07:24:02.528] Bringing down cluster
W0916 07:24:02.628] INSTANCE_GROUPS=e2e-82740-ac87c-minion-group
W0916 07:24:02.629] NODE_NAMES=e2e-82740-ac87c-minion-group-5hbp e2e-82740-ac87c-minion-group-ccm9 e2e-82740-ac87c-minion-group-v1t3
W0916 07:24:05.665] Deleting Managed Instance Group...
W0916 07:24:06.019] done.
W0916 07:24:06.023] ERROR: (gcloud.compute.instance-groups.managed.delete) Some requests did not succeed:
W0916 07:24:06.023]  - The resource 'projects/k8s-presubmit-scale/zones/us-east1-b/instanceGroupManagers/e2e-82740-ac87c-minion-group' is not ready
W0916 07:24:06.023] 
W0916 07:24:06.128] Failed to delete instance group(s).
W0916 07:24:08.477] ERROR: (gcloud.compute.instance-templates.delete) Could not fetch resource:
W0916 07:24:08.477]  - The instance_template resource 'projects/k8s-presubmit-scale/global/instanceTemplates/e2e-82740-ac87c-minion-template' is already being used by 'projects/k8s-presubmit-scale/zones/us-east1-b/instanceGroupManagers/e2e-82740-ac87c-minion-group'
W0916 07:24:08.477] 
W0916 07:24:18.455] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/instanceTemplates/e2e-82740-ac87c-windows-node-template].
W0916 07:24:25.036] Warning: Permanently added 'compute.8127444866441580708' (ED25519) to the list of known hosts.
I0916 07:24:25.515] Removing etcd replica, name: e2e-82740-ac87c-master, port: 2379, result: 52
I0916 07:24:27.080] {"message":"Internal Server Error"}Removing etcd replica, name: e2e-82740-ac87c-master, port: 4002, result: 0
W0916 07:24:33.769] Updated [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-82740-ac87c-master].
W0916 07:27:04.640] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-82740-ac87c-master].
W0916 07:27:28.650] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-82740-ac87c-minion-all].
W0916 07:27:31.822] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-82740-ac87c-master-https].
W0916 07:27:32.907] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-82740-ac87c-master-etcd].
W0916 07:27:40.623] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0916 07:27:40.624]  - The resource 'projects/k8s-presubmit-scale/global/firewalls/e2e-82740-ac87c-default-internal-master' was not found
W0916 07:27:40.624] 
W0916 07:27:41.671] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0916 07:27:41.671]  - The resource 'projects/k8s-presubmit-scale/global/firewalls/e2e-82740-ac87c-default-internal-node' is not ready
W0916 07:27:41.671] 
W0916 07:27:42.575] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0916 07:27:42.575]  - The resource 'projects/k8s-presubmit-scale/global/firewalls/e2e-82740-ac87c-default-ssh' is not ready
W0916 07:27:42.575] 
W0916 07:27:43.142] Failed to delete firewall rules.
I0916 07:27:44.254] Deleting firewall rules remaining in network e2e-82740-ac87c: e2e-82740-ac87c-default-internal-node
I0916 07:27:44.255] e2e-82740-ac87c-default-ssh
I0916 07:27:44.255] e2e-82740-ac87c-kubemark-default-internal-master
I0916 07:27:44.255] e2e-82740-ac87c-kubemark-default-internal-node
I0916 07:27:44.255] e2e-82740-ac87c-kubemark-master-etcd
I0916 07:27:44.255] e2e-82740-ac87c-kubemark-master-https
I0916 07:27:44.255] e2e-82740-ac87c-kubemark-minion-all
I0916 07:27:44.255] e2e-82740-ac87c-kubemark-minion-http-alt
I0916 07:27:44.255] e2e-82740-ac87c-kubemark-minion-nodeports
W0916 07:27:46.585] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0916 07:27:46.585]  - The resource 'projects/k8s-presubmit-scale/global/firewalls/e2e-82740-ac87c-default-ssh' was not found
W0916 07:27:46.585] 
W0916 07:28:07.968] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-82740-ac87c-kubemark-minion-http-alt].
W0916 07:28:08.971] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-82740-ac87c-kubemark-default-internal-master].
W0916 07:28:10.865] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-82740-ac87c-kubemark-master-etcd].
W0916 07:28:12.310] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-82740-ac87c-kubemark-minion-all].
W0916 07:28:13.961] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-82740-ac87c-kubemark-minion-nodeports].
W0916 07:28:15.156] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-82740-ac87c-kubemark-default-internal-node].
W0916 07:28:16.719] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-82740-ac87c-kubemark-master-https].
I0916 07:28:17.674] Deleting custom subnet...
W0916 07:28:18.794] ERROR: (gcloud.compute.networks.subnets.delete) Could not fetch resource:
W0916 07:28:18.795]  - The subnetwork resource 'projects/k8s-presubmit-scale/regions/us-east1/subnetworks/e2e-82740-ac87c-custom-subnet' is already being used by 'projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-82740-ac87c-kubemark-master'
W0916 07:28:18.795] 
W0916 07:28:27.662] ERROR: (gcloud.compute.networks.delete) Could not fetch resource:
W0916 07:28:27.663]  - The network resource 'projects/k8s-presubmit-scale/global/networks/e2e-82740-ac87c' is already being used by 'projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-82740-ac87c-kubemark-master'
W0916 07:28:27.663] 
I0916 07:28:27.763] Failed to delete network 'e2e-82740-ac87c'. Listing firewall-rules:
W0916 07:28:28.508] 
W0916 07:28:28.508] To show all fields of the firewall, please show in JSON format: --format=json
W0916 07:28:28.508] To show all fields in table format, please see the examples in --help.
W0916 07:28:28.508] 
W0916 07:28:28.745] W0916 07:28:28.745425   79722 loader.go:223] Config not found: /workspace/.kube/config
W0916 07:28:28.897] W0916 07:28:28.896494   79769 loader.go:223] Config not found: /workspace/.kube/config
... skipping 29 lines ...
I0916 07:28:58.217] Found existing network e2e-82740-ac87c in CUSTOM mode.
I0916 07:29:00.650] IP aliases are enabled. Creating subnetworks.
I0916 07:29:01.559] Using subnet e2e-82740-ac87c-custom-subnet
I0916 07:29:01.564] Starting master and configuring firewalls
I0916 07:29:01.564] Configuring firewall for apiserver konnectivity server
W0916 07:29:02.134] Creating firewall...
W0916 07:29:02.526] failed.
W0916 07:29:02.530] ERROR: (gcloud.compute.firewall-rules.create) Could not fetch resource:
W0916 07:29:02.530]  - The resource 'projects/k8s-presubmit-scale/global/firewalls/e2e-82740-ac87c-master-https' already exists
W0916 07:29:02.530] 
W0916 07:29:02.890] ERROR: (gcloud.compute.disks.create) Could not fetch resource:
W0916 07:29:02.890]  - The resource 'projects/k8s-presubmit-scale/zones/us-east1-b/disks/e2e-82740-ac87c-master-pd' already exists
W0916 07:29:02.891] 
W0916 07:29:02.967] 2019/09/16 07:29:02 process.go:155: Step './hack/e2e-internal/e2e-up.sh' finished in 33.587054439s
W0916 07:29:02.967] 2019/09/16 07:29:02 e2e.go:522: Dumping logs locally to: /workspace/_artifacts
W0916 07:29:02.968] 2019/09/16 07:29:02 process.go:153: Running: ./cluster/log-dump/log-dump.sh /workspace/_artifacts
W0916 07:29:03.035] Trying to find master named 'e2e-82740-ac87c-master'
... skipping 19 lines ...
W0916 07:29:57.265] scp: /var/log/glbc.log*: No such file or directory
W0916 07:29:57.265] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0916 07:29:57.266] scp: /var/log/kube-addon-manager.log*: No such file or directory
W0916 07:29:57.266] scp: /var/log/fluentd.log*: No such file or directory
W0916 07:29:57.266] scp: /var/log/kubelet.cov*: No such file or directory
W0916 07:29:57.266] scp: /var/log/startupscript.log*: No such file or directory
W0916 07:29:57.271] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0916 07:29:57.372] Dumping logs from nodes locally to '/workspace/_artifacts'
I0916 07:29:57.372] Detecting nodes in the cluster
I0916 07:30:41.622] Changing logfiles to be world-readable for download
I0916 07:30:41.667] Changing logfiles to be world-readable for download
I0916 07:30:41.916] Changing logfiles to be world-readable for download
I0916 07:30:42.100] Changing logfiles to be world-readable for download
... skipping 17 lines ...
W0916 07:30:48.643] scp: /var/log/node-problem-detector.log*: No such file or directory
W0916 07:30:48.643] scp: /var/log/kubelet.cov*: No such file or directory
W0916 07:30:48.643] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0916 07:30:48.643] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0916 07:30:48.643] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0916 07:30:48.643] scp: /var/log/startupscript.log*: No such file or directory
W0916 07:30:48.648] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0916 07:30:49.030] scp: /var/log/kube-proxy.log*: No such file or directory
W0916 07:30:49.030] scp: /var/log/fluentd.log*: No such file or directory
W0916 07:30:49.031] scp: /var/log/node-problem-detector.log*: No such file or directory
W0916 07:30:49.031] scp: /var/log/kubelet.cov*: No such file or directory
W0916 07:30:49.032] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0916 07:30:49.032] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0916 07:30:49.032] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0916 07:30:49.033] scp: /var/log/startupscript.log*: No such file or directory
W0916 07:30:49.036] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0916 07:30:49.096] scp: /var/log/kube-proxy.log*: No such file or directory
W0916 07:30:49.097] scp: /var/log/fluentd.log*: No such file or directory
W0916 07:30:49.099] scp: /var/log/node-problem-detector.log*: No such file or directory
W0916 07:30:49.099] scp: /var/log/kubelet.cov*: No such file or directory
W0916 07:30:49.099] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0916 07:30:49.100] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0916 07:30:49.101] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0916 07:30:49.101] scp: /var/log/startupscript.log*: No such file or directory
W0916 07:30:49.103] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0916 07:30:49.354] scp: /var/log/kube-proxy.log*: No such file or directory
W0916 07:30:49.354] scp: /var/log/fluentd.log*: No such file or directory
W0916 07:30:49.354] scp: /var/log/node-problem-detector.log*: No such file or directory
W0916 07:30:49.354] scp: /var/log/kubelet.cov*: No such file or directory
W0916 07:30:49.355] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0916 07:30:49.355] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0916 07:30:49.355] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0916 07:30:49.355] scp: /var/log/startupscript.log*: No such file or directory
W0916 07:30:49.358] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0916 07:30:51.708] Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov kubelet-hollow-node-*.log kubeproxy-hollow-node-*.log npd-hollow-node-*.log startupscript.log' from e2e-82740-ac87c-minion-group-2rc1
I0916 07:30:51.876] Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov kubelet-hollow-node-*.log kubeproxy-hollow-node-*.log npd-hollow-node-*.log startupscript.log' from e2e-82740-ac87c-minion-group-dzfs
I0916 07:30:51.896] Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov kubelet-hollow-node-*.log kubeproxy-hollow-node-*.log npd-hollow-node-*.log startupscript.log' from e2e-82740-ac87c-minion-group-h6x6
W0916 07:30:52.824] 
W0916 07:30:52.824] Specify --start=39786 in the next get-serial-port-output invocation to get only the new output starting from here.
W0916 07:30:52.986] 
... skipping 5 lines ...
W0916 07:30:54.628] scp: /var/log/node-problem-detector.log*: No such file or directory
W0916 07:30:54.628] scp: /var/log/kubelet.cov*: No such file or directory
W0916 07:30:54.628] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0916 07:30:54.628] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0916 07:30:54.628] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0916 07:30:54.628] scp: /var/log/startupscript.log*: No such file or directory
W0916 07:30:54.632] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0916 07:30:54.638] scp: /var/log/kube-proxy.log*: No such file or directory
W0916 07:30:54.638] scp: /var/log/fluentd.log*: No such file or directory
W0916 07:30:54.638] scp: /var/log/kube-proxy.log*: No such file or directory
W0916 07:30:54.639] scp: /var/log/node-problem-detector.log*: No such file or directory
W0916 07:30:54.639] scp: /var/log/kubelet.cov*: No such file or directory
W0916 07:30:54.639] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
... skipping 4 lines ...
W0916 07:30:54.640] scp: /var/log/node-problem-detector.log*: No such file or directory
W0916 07:30:54.640] scp: /var/log/kubelet.cov*: No such file or directory
W0916 07:30:54.640] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0916 07:30:54.641] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0916 07:30:54.641] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0916 07:30:54.641] scp: /var/log/startupscript.log*: No such file or directory
W0916 07:30:54.644] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0916 07:30:54.645] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0916 07:30:58.188] INSTANCE_GROUPS=e2e-82740-ac87c-minion-group
W0916 07:30:58.188] NODE_NAMES=e2e-82740-ac87c-minion-group-1p47 e2e-82740-ac87c-minion-group-2rc1 e2e-82740-ac87c-minion-group-c3l9 e2e-82740-ac87c-minion-group-dzfs e2e-82740-ac87c-minion-group-h6x6 e2e-82740-ac87c-minion-group-np74 e2e-82740-ac87c-minion-group-qv2p
I0916 07:30:59.087] Failures for e2e-82740-ac87c-minion-group (if any):
W0916 07:31:00.314] 2019/09/16 07:31:00 process.go:155: Step './cluster/log-dump/log-dump.sh /workspace/_artifacts' finished in 1m57.346641206s
W0916 07:31:00.315] 2019/09/16 07:31:00 process.go:153: Running: ./hack/e2e-internal/e2e-down.sh
W0916 07:31:00.370] Project: k8s-presubmit-scale
... skipping 25 lines ...
W0916 07:35:20.386] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/regions/us-east1/addresses/e2e-82740-ac87c-master-ip].
W0916 07:35:49.248] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-82740-ac87c-default-internal-master].
W0916 07:35:49.726] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-82740-ac87c-default-internal-node].
W0916 07:35:50.569] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-82740-ac87c-default-ssh].
I0916 07:35:51.570] Deleting firewall rules remaining in network e2e-82740-ac87c: 
I0916 07:35:52.464] Deleting custom subnet...
W0916 07:35:53.714] ERROR: (gcloud.compute.networks.subnets.delete) Could not fetch resource:
W0916 07:35:53.715]  - The subnetwork resource 'projects/k8s-presubmit-scale/regions/us-east1/subnetworks/e2e-82740-ac87c-custom-subnet' is already being used by 'projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-82740-ac87c-kubemark-master'
W0916 07:35:53.715] 
W0916 07:36:02.894] ERROR: (gcloud.compute.networks.delete) Could not fetch resource:
W0916 07:36:02.894]  - The network resource 'projects/k8s-presubmit-scale/global/networks/e2e-82740-ac87c' is already being used by 'projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-82740-ac87c-kubemark-master'
W0916 07:36:02.894] 
I0916 07:36:02.995] Failed to delete network 'e2e-82740-ac87c'. Listing firewall-rules:
W0916 07:36:03.946] 
W0916 07:36:03.947] To show all fields of the firewall, please show in JSON format: --format=json
W0916 07:36:03.947] To show all fields in table format, please see the examples in --help.
W0916 07:36:03.947] 
W0916 07:36:04.230] W0916 07:36:04.230449   85597 loader.go:223] Config not found: /workspace/.kube/config
W0916 07:36:04.398] W0916 07:36:04.397692   85643 loader.go:223] Config not found: /workspace/.kube/config
... skipping 9 lines ...
I0916 07:36:04.910] Cleared config for k8s-presubmit-scale_e2e-82740-ac87c from /workspace/.kube/config
I0916 07:36:04.910] Done
W0916 07:36:04.938] W0916 07:36:04.903997   85784 loader.go:223] Config not found: /workspace/.kube/config
W0916 07:36:04.938] W0916 07:36:04.904195   85784 loader.go:223] Config not found: /workspace/.kube/config
W0916 07:36:04.938] 2019/09/16 07:36:04 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 5m4.598243058s
W0916 07:36:04.939] 2019/09/16 07:36:04 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0916 07:36:04.939] 2019/09/16 07:36:04 main.go:319: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 1
W0916 07:36:04.939] Traceback (most recent call last):
W0916 07:36:04.939]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W0916 07:36:04.939]     main(parse_args())
W0916 07:36:04.940]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W0916 07:36:04.940]     mode.start(runner_args)
W0916 07:36:04.940]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0916 07:36:04.940]     check_env(env, self.command, *args)
W0916 07:36:04.940]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0916 07:36:04.940]     subprocess.check_call(cmd, env=env)
W0916 07:36:04.941]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0916 07:36:04.941]     raise CalledProcessError(retcode, cmd)
W0916 07:36:04.943] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--build=bazel', '--stage=gs://kubernetes-release-pull/ci/pull-kubernetes-kubemark-e2e-gce-big', '--up', '--down', '--provider=gce', '--cluster=e2e-82740-ac87c', '--gcp-network=e2e-82740-ac87c', '--extract=local', '--gcp-master-size=n1-standard-4', '--gcp-node-size=n1-standard-8', '--gcp-nodes=7', '--gcp-project=k8s-presubmit-scale', '--gcp-zone=us-east1-b', '--kubemark', '--kubemark-nodes=500', '--test_args=--ginkgo.focus=xxxx', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--nodes=500', '--test-cmd-args=--provider=kubemark', '--test-cmd-args=--report-dir=/workspace/_artifacts', '--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testoverrides=./testing/load/kubemark/500_nodes/override.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_pvs.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=100m')' returned non-zero exit status 1
E0916 07:36:04.943] Command failed
I0916 07:36:04.943] process 693 exited with code 1 after 28.1m
E0916 07:36:04.943] FAIL: pull-kubernetes-kubemark-e2e-gce-big
I0916 07:36:04.944] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0916 07:36:05.597] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0916 07:36:05.666] process 85795 exited with code 0 after 0.0m
I0916 07:36:05.666] Call:  gcloud config get-value account
I0916 07:36:06.010] process 85807 exited with code 0 after 0.0m
I0916 07:36:06.010] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0916 07:36:06.011] Upload result and artifacts...
I0916 07:36:06.011] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/82740/pull-kubernetes-kubemark-e2e-gce-big/1173493223612485632
I0916 07:36:06.012] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/82740/pull-kubernetes-kubemark-e2e-gce-big/1173493223612485632/artifacts
W0916 07:36:07.096] CommandException: One or more URLs matched no objects.
E0916 07:36:07.215] Command failed
I0916 07:36:07.216] process 85819 exited with code 1 after 0.0m
W0916 07:36:07.216] Remote dir gs://kubernetes-jenkins/pr-logs/pull/82740/pull-kubernetes-kubemark-e2e-gce-big/1173493223612485632/artifacts not exist yet
I0916 07:36:07.216] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/82740/pull-kubernetes-kubemark-e2e-gce-big/1173493223612485632/artifacts
I0916 07:36:09.792] process 85961 exited with code 0 after 0.0m
I0916 07:36:09.793] Call:  git rev-parse HEAD
I0916 07:36:09.799] process 86640 exited with code 0 after 0.0m
... skipping 20 lines ...
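The "Error lines from build-log.txt" excerpt above is produced by filtering the raw log down to error-looking lines. A minimal sketch of such a filter, assuming simple substring patterns (this is hypothetical, not the filter the job viewer actually uses):

```python
import re

# Hypothetical filter: keep only lines that look like errors or failures,
# similar in spirit to the "Error lines from build-log.txt" view above.
ERROR_PATTERNS = re.compile(r"ERROR:|FAIL:|Failed|Command failed|error during")

def error_lines(log_text: str):
    """Return the lines of a build log that match an error pattern."""
    return [line for line in log_text.splitlines()
            if ERROR_PATTERNS.search(line)]
```

In practice a real filter would also carry a few lines of context around each match, as the excerpt above does with the indented `- The resource ...` detail lines.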