Result: FAILURE
Tests: 0 failed / 22 succeeded
Started: 2020-02-12 21:06
Elapsed: 1h35m
Builder: gke-prow-default-pool-cf4891d4-4c7x
Refs: master:6541758f, 81678:cc32702e, 88084:8ff6b24c
pod: 635d8ee2-4ddb-11ea-801c-6a256f4ae60f
resultstore: https://source.cloud.google.com/results/invocations/723ae426-b0d1-452c-864f-c340bddf3c1e/targets/test
infra-commit: 1f7b8ac1d
job-version: v1.18.0-alpha.4.8+e084303b16524f
repo: k8s.io/kubernetes
repo-commit: e084303b16524f6f6b06002abd9be49d7a9047e6
repos: k8s.io/kubernetes: master:6541758fd4d9fc375839a484a7e03c189b05ce3d, 81678:cc32702e8fa1c20346e5c6d9c2d349d10fc23c3a, 88084:8ff6b24c5736d7b2ce31319ae950a806f2325327; k8s.io/perf-tests: master; k8s.io/release: master
revision: v1.18.0-alpha.4.8+e084303b16524f

No Test Failures!



Error lines from build-log.txt

... skipping 1086 lines ...
W0212 21:46:39.767] Looking for address 'e2e-76a533c5ff-ac87c-master-ip'
W0212 21:46:41.231] Looking for address 'e2e-76a533c5ff-ac87c-master-internal-ip'
W0212 21:46:42.743] Using master: e2e-76a533c5ff-ac87c-master (external IP: 35.196.25.211; internal IP: 10.40.0.2)
I0212 21:46:42.843] Waiting up to 300 seconds for cluster initialization.
I0212 21:46:42.844] 
I0212 21:46:42.844]   This will continually check to see if the API for kubernetes is reachable.
I0212 21:46:42.844]   This may time out if there was some uncaught error during start up.
I0212 21:46:42.844] 
I0212 21:46:50.274] .Kubernetes cluster created.
I0212 21:46:50.696] Cluster "k8s-presubmit-scale-4_e2e-76a533c5ff-ac87c" set.
I0212 21:46:51.157] User "k8s-presubmit-scale-4_e2e-76a533c5ff-ac87c" set.
I0212 21:46:51.533] Context "k8s-presubmit-scale-4_e2e-76a533c5ff-ac87c" created.
I0212 21:46:51.873] Switched to context "k8s-presubmit-scale-4_e2e-76a533c5ff-ac87c".
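The four messages above are the standard output of kubectl config as kube-up.sh wires the new cluster into the kubeconfig. A rough manual equivalent is sketched below; the --server value is the master's external IP from this log, while the certificate and token flags are placeholders, not taken from the log:

  # Hypothetical reconstruction of the kubeconfig setup performed here;
  # --certificate-authority and the credential are placeholders.
  kubectl config set-cluster k8s-presubmit-scale-4_e2e-76a533c5ff-ac87c \
      --server=https://35.196.25.211 \
      --certificate-authority=/path/to/ca.crt --embed-certs=true
  kubectl config set-credentials k8s-presubmit-scale-4_e2e-76a533c5ff-ac87c \
      --token="$KUBE_BEARER_TOKEN"   # placeholder credential
  kubectl config set-context k8s-presubmit-scale-4_e2e-76a533c5ff-ac87c \
      --cluster=k8s-presubmit-scale-4_e2e-76a533c5ff-ac87c \
      --user=k8s-presubmit-scale-4_e2e-76a533c5ff-ac87c
  kubectl config use-context k8s-presubmit-scale-4_e2e-76a533c5ff-ac87c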
... skipping 24 lines ...
I0212 21:47:50.966] e2e-76a533c5ff-ac87c-minion-group-fq4z   Ready                      <none>   21s   v1.18.0-alpha.4.8+e084303b16524f
I0212 21:47:50.966] e2e-76a533c5ff-ac87c-minion-group-qrll   Ready                      <none>   21s   v1.18.0-alpha.4.8+e084303b16524f
I0212 21:47:50.966] e2e-76a533c5ff-ac87c-minion-group-wb3c   Ready                      <none>   19s   v1.18.0-alpha.4.8+e084303b16524f
I0212 21:47:50.967] e2e-76a533c5ff-ac87c-minion-group-xhc7   Ready                      <none>   20s   v1.18.0-alpha.4.8+e084303b16524f
I0212 21:47:50.967] e2e-76a533c5ff-ac87c-minion-group-z6g8   Ready                      <none>   22s   v1.18.0-alpha.4.8+e084303b16524f
I0212 21:47:51.481] Validate output:
I0212 21:47:51.975] NAME                 STATUS    MESSAGE             ERROR
I0212 21:47:51.975] controller-manager   Healthy   ok                  
I0212 21:47:51.976] scheduler            Healthy   ok                  
I0212 21:47:51.976] etcd-1               Healthy   {"health":"true"}   
I0212 21:47:51.976] etcd-0               Healthy   {"health":"true"}   
I0212 21:47:51.987] Cluster validation succeeded
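The "Validate output" table above (NAME / STATUS / MESSAGE / ERROR) is the component-status listing the validation step prints. A minimal manual repeat of the same check against this cluster, assuming the context created above, would be:

  # componentstatuses is still served by the v1.18 API used in this job
  # (it was deprecated later); nodes were listed a few lines earlier.
  kubectl get componentstatuses
  kubectl get nodes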
W0212 21:47:52.089] Done, listing cluster services:
... skipping 220 lines ...
W0212 21:54:00.402] Trying to find master named 'e2e-76a533c5ff-ac87c-kubemark-master'
W0212 21:54:00.402] Looking for address 'e2e-76a533c5ff-ac87c-kubemark-master-ip'
W0212 21:54:02.262] Looking for address 'e2e-76a533c5ff-ac87c-kubemark-master-internal-ip'
I0212 21:54:05.804] Waiting up to 300 seconds for cluster initialization.
I0212 21:54:05.805] 
I0212 21:54:05.805]   This will continually check to see if the API for kubernetes is reachable.
I0212 21:54:05.805]   This may time out if there was some uncaught error during start up.
I0212 21:54:05.805] 
W0212 21:54:05.905] Using master: e2e-76a533c5ff-ac87c-kubemark-master (external IP: 34.74.250.104; internal IP: 10.40.0.11)
I0212 21:54:06.322] Kubernetes cluster created.
I0212 21:54:07.601] Cluster "k8s-presubmit-scale-4_e2e-76a533c5ff-ac87c-kubemark" set.
I0212 21:54:08.482] User "k8s-presubmit-scale-4_e2e-76a533c5ff-ac87c-kubemark" set.
I0212 21:54:09.193] Context "k8s-presubmit-scale-4_e2e-76a533c5ff-ac87c-kubemark" created.
... skipping 16 lines ...
I0212 21:54:32.161] Found 0 Nodes, allowing additional 2 iterations for other Nodes to join.
I0212 21:54:32.167] Waiting for 1 ready nodes. 0 ready nodes, 1 registered. Retrying.
I0212 21:54:47.848] Found 1 node(s).
I0212 21:54:48.394] NAME                                   STATUS                     ROLES    AGE   VERSION
I0212 21:54:48.394] e2e-76a533c5ff-ac87c-kubemark-master   Ready,SchedulingDisabled   <none>   35s   v1.18.0-alpha.4.8+e084303b16524f
I0212 21:54:49.433] Validate output:
I0212 21:54:50.848] NAME                 STATUS    MESSAGE             ERROR
I0212 21:54:50.852] controller-manager   Healthy   ok                  
I0212 21:54:50.853] scheduler            Healthy   ok                  
I0212 21:54:50.855] etcd-1               Healthy   {"health":"true"}   
I0212 21:54:50.858] etcd-0               Healthy   {"health":"true"}   
I0212 21:54:50.885] Cluster validation succeeded
W0212 21:54:50.989] Done, listing cluster services:
... skipping 650 lines ...
W0212 22:00:10.298] I0212 22:00:10.298200   96838 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/prometheus-serviceMonitor.yaml
W0212 22:00:10.338] I0212 22:00:10.337704   96838 prometheus.go:201] Exposing kube-apiserver metrics in kubemark cluster
W0212 22:00:10.493] I0212 22:00:10.492931   96838 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-endpoints.yaml
W0212 22:00:10.535] I0212 22:00:10.534840   96838 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-service.yaml
W0212 22:00:10.575] I0212 22:00:10.575053   96838 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-serviceMonitor.yaml
W0212 22:00:10.619] I0212 22:00:10.619003   96838 prometheus.go:277] Waiting for Prometheus stack to become healthy...
W0212 22:00:40.661] W0212 22:00:40.661320   96838 util.go:63] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090), response: "k8s\x00\n\f\n\x02v1\x12\x06Status\x12]\n\x06\n\x00\x12\x00\x1a\x00\x12\aFailure\x1a3no endpoints available for service \"prometheus-k8s\"\"\x12ServiceUnavailable0\xf7\x03\x1a\x00\"\x00"
W0212 22:01:10.671] I0212 22:01:10.670695   96838 util.go:92] 7/8 targets are ready, example not ready target: {map[endpoint:http instance:10.64.7.74:8080 job:prometheus-operator namespace:monitoring pod:prometheus-operator-778f7b745b-xbd7w service:prometheus-operator] unknown}
W0212 22:01:40.668] I0212 22:01:40.668006   96838 util.go:95] All 8 expected targets are ready
W0212 22:01:40.715] I0212 22:01:40.715109   96838 util.go:95] All 1 expected targets are ready
W0212 22:01:40.715] I0212 22:01:40.715159   96838 prometheus.go:166] Prometheus stack set up successfully
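The earlier "no endpoints available for service \"prometheus-k8s\"" error was transient: the Prometheus pods simply weren't ready on the first poll, and all 8 targets became ready a minute later. If it persisted, a hedged manual check (service and namespace names taken from the log, nothing else assumed) would be:

  # Does the prometheus-k8s Service have backing endpoints, and are the
  # monitoring pods Running/Ready?
  kubectl -n monitoring get endpoints prometheus-k8s
  kubectl -n monitoring get pods -o wide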
W0212 22:01:40.715] W0212 22:01:40.715577   96838 imagepreload.go:85] No images specified. Skipping image preloading
W0212 22:01:40.716] I0212 22:01:40.715589   96838 clusterloader.go:178] --------------------------------------------------------------------------------
... skipping 6377 lines ...
W0212 22:23:50.807] I0212 22:23:50.807536   96838 wait_for_pods.go:92] WaitForControlledPodsRunning: namespace(test-mfzefw-3), labelSelector(name=small-deployment-282): Pods: 0 out of 0 created, 0 running (0 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
W0212 22:23:51.067] I0212 22:23:51.067327   96838 wait_for_pods.go:92] WaitForControlledPodsRunning: namespace(test-mfzefw-5), labelSelector(name=small-deployment-67): Pods: 0 out of 0 created, 0 running (0 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
W0212 22:23:51.348] I0212 22:23:51.348364   96838 wait_for_pods.go:92] WaitForControlledPodsRunning: namespace(test-mfzefw-5), labelSelector(name=medium-deployment-13): Pods: 0 out of 0 created, 0 running (0 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 22 terminating, 0 unknown, 0 runningButNotReady 
W0212 22:23:51.351] I0212 22:23:51.350890   96838 wait_for_pods.go:92] WaitForControlledPodsRunning: namespace(test-mfzefw-5), labelSelector(name=small-deployment-74): Pods: 0 out of 0 created, 0 running (0 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 1 terminating, 0 unknown, 0 runningButNotReady 
W0212 22:23:51.354] I0212 22:23:51.353733   96838 wait_for_pods.go:92] WaitForControlledPodsRunning: namespace(test-mfzefw-1), labelSelector(name=small-deployment-200): Pods: 0 out of 0 created, 0 running (0 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 7 terminating, 0 unknown, 0 runningButNotReady 
W0212 22:23:51.409] I0212 22:23:51.409536   96838 wait_for_pods.go:92] WaitForControlledPodsRunning: namespace(test-mfzefw-2), labelSelector(name=small-deployment-208): Pods: 0 out of 0 created, 0 running (0 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 3 terminating, 0 unknown, 0 runningButNotReady 
W0212 22:23:51.446] 2020/02/12 22:23:51 main.go:755: [Boskos] Update of k8s-presubmit-scale-4 failed with status 404 Not Found, status code 404 updating k8s-presubmit-scale-4
W0212 22:23:51.457] I0212 22:23:51.456681   96838 wait_for_pods.go:92] WaitForControlledPodsRunning: namespace(test-mfzefw-2), labelSelector(group=load,name=medium-statefulset-0): Pods: 0 out of 0 created, 0 running (0 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 5 terminating, 0 unknown, 0 runningButNotReady 
W0212 22:23:51.465] I0212 22:23:51.464822   96838 wait_for_pods.go:92] WaitForControlledPodsRunning: namespace(test-mfzefw-5), labelSelector(name=small-deployment-209): Pods: 0 out of 0 created, 0 running (0 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
W0212 22:23:51.467] I0212 22:23:51.467583   96838 wait_for_pods.go:92] WaitForControlledPodsRunning: namespace(test-mfzefw-5), labelSelector(name=small-deployment-37): Pods: 0 out of 0 created, 0 running (0 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
W0212 22:23:51.677] I0212 22:23:51.677006   96838 wait_for_pods.go:92] WaitForControlledPodsRunning: namespace(test-mfzefw-5), labelSelector(name=small-deployment-5): Pods: 0 out of 0 created, 0 running (0 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
W0212 22:23:51.759] I0212 22:23:51.758783   96838 wait_for_pods.go:92] WaitForControlledPodsRunning: namespace(test-mfzefw-2), labelSelector(name=small-deployment-130): Pods: 0 out of 0 created, 0 running (0 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 5 terminating, 0 unknown, 0 runningButNotReady 
W0212 22:23:51.778] I0212 22:23:51.778179   96838 wait_for_pods.go:92] WaitForControlledPodsRunning: namespace(test-mfzefw-1), labelSelector(name=small-deployment-272): Pods: 0 out of 0 created, 0 running (0 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
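These WaitForControlledPodsRunning lines are ClusterLoader2 counting pods per controller selector while the load test's namespaces wind down. One of the probes above could be repeated by hand like this; the namespace and label selector are from the log, and the deployment name is an assumption (the load config conventionally names the controller after the selector value):

  # Manual equivalent of the small-deployment-67 probe in test-mfzefw-5.
  kubectl -n test-mfzefw-5 get pods -l name=small-deployment-67 -o wide
  kubectl -n test-mfzefw-5 get deployment small-deployment-67   # name assumed to match the selector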
... skipping 4012 lines ...
W0212 22:29:11.075] Specify --start=47822 in the next get-serial-port-output invocation to get only the new output starting from here.
W0212 22:29:15.390] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0212 22:29:15.462] scp: /var/log/konnectivity-server.log*: No such file or directory
W0212 22:29:15.463] scp: /var/log/fluentd.log*: No such file or directory
W0212 22:29:15.463] scp: /var/log/kubelet.cov*: No such file or directory
W0212 22:29:15.463] scp: /var/log/startupscript.log*: No such file or directory
W0212 22:29:15.471] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0212 22:29:15.576] Dumping logs from nodes to GCS directly at 'gs://kubernetes-jenkins/pr-logs/pull/batch/pull-kubernetes-kubemark-e2e-gce-big/1227700389348904960/artifacts' using logexporter
I0212 22:29:15.576] Detecting nodes in the cluster
I0212 22:29:20.569] namespace/logexporter created
I0212 22:29:20.610] secret/google-service-account created
I0212 22:29:20.648] daemonset.apps/logexporter created
W0212 22:29:21.869] CommandException: One or more URLs matched no objects.
W0212 22:29:25.481] 2020/02/12 22:29:25 main.go:755: [Boskos] Update of k8s-presubmit-scale-4 failed with Post http://boskos.test-pods.svc.cluster.local./update?name=k8s-presubmit-scale-4&owner=pull-kubernetes-kubemark-e2e-gce-big&state=busy: dial tcp 10.63.250.132:80: connect: connection refused
W0212 22:29:38.235] CommandException: One or more URLs matched no objects.
I0212 22:29:54.885] Successfully listed marker files for successful nodes
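The two CommandException lines above appear to be gsutil polling for the per-node marker files before the logexporter pods had uploaded them; once the markers exist, "Successfully listed marker files" is printed. The resulting artifacts can be browsed at the GCS path announced earlier in the log:

  # Path taken verbatim from the "Dumping logs from nodes" line above.
  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/batch/pull-kubernetes-kubemark-e2e-gce-big/1227700389348904960/artifacts/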
W0212 22:29:55.498] scp: /var/log/glbc.log*: No such file or directory
W0212 22:29:55.498] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0212 22:29:55.567] scp: /var/log/konnectivity-server.log*: No such file or directory
W0212 22:29:55.567] scp: /var/log/fluentd.log*: No such file or directory
W0212 22:29:55.568] scp: /var/log/kubelet.cov*: No such file or directory
W0212 22:29:55.569] scp: /var/log/startupscript.log*: No such file or directory
W0212 22:29:55.574] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0212 22:29:55.685] Skipping dumping of node logs
W0212 22:29:55.786] 2020/02/12 22:29:55 process.go:155: Step './test/kubemark/master-log-dump.sh /workspace/_artifacts' finished in 1m28.215813412s
W0212 22:29:55.786] 2020/02/12 22:29:55 process.go:153: Running: ./test/kubemark/stop-kubemark.sh
I0212 22:30:11.226] Successfully listed marker files for successful nodes
I0212 22:30:11.570] Fetching logs from logexporter-4b5dw running on e2e-76a533c5ff-ac87c-minion-group-qrll
I0212 22:30:11.573] Fetching logs from logexporter-677n6 running on e2e-76a533c5ff-ac87c-minion-group-z6g8
... skipping 40 lines ...
W0212 22:34:04.495] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale-4/global/instanceTemplates/e2e-76a533c5ff-ac87c-windows-node-template].
I0212 22:34:09.558] Successfully executed 'curl -s --cacert /etc/srv/kubernetes/pki/etcd-apiserver-ca.crt --cert /etc/srv/kubernetes/pki/etcd-apiserver-client.crt --key /etc/srv/kubernetes/pki/etcd-apiserver-client.key https://127.0.0.1:2379/v2/members/$(curl -s --cacert /etc/srv/kubernetes/pki/etcd-apiserver-ca.crt --cert /etc/srv/kubernetes/pki/etcd-apiserver-client.crt --key /etc/srv/kubernetes/pki/etcd-apiserver-client.key https://127.0.0.1:2379/v2/members -XGET | sed 's/{\"id/\n/g' | grep e2e-76a533c5ff-ac87c-master\" | cut -f 3 -d \") -XDELETE -L 2>/dev/null' on e2e-76a533c5ff-ac87c-master
I0212 22:34:09.559] Removing etcd replica, name: e2e-76a533c5ff-ac87c-master, port: 2379, result: 0
I0212 22:34:11.354] Successfully executed 'curl -s  http://127.0.0.1:4002/v2/members/$(curl -s  http://127.0.0.1:4002/v2/members -XGET | sed 's/{\"id/\n/g' | grep e2e-76a533c5ff-ac87c-master\" | cut -f 3 -d \") -XDELETE -L 2>/dev/null' on e2e-76a533c5ff-ac87c-master
I0212 22:34:11.354] Removing etcd replica, name: e2e-76a533c5ff-ac87c-master, port: 4002, result: 0
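The two "Removing etcd replica" steps run the quoted curl pipeline on the master to deregister it from each etcd cluster. Unrolled for readability (same commands as in the log for the main etcd on port 2379, with only whitespace, quoting, and an intermediate variable added):

  # Look up this master's member ID in the etcd v2 members API, then DELETE it.
  MEMBER_ID=$(curl -s \
      --cacert /etc/srv/kubernetes/pki/etcd-apiserver-ca.crt \
      --cert   /etc/srv/kubernetes/pki/etcd-apiserver-client.crt \
      --key    /etc/srv/kubernetes/pki/etcd-apiserver-client.key \
      https://127.0.0.1:2379/v2/members -XGET \
      | sed 's/{"id/\n/g' | grep 'e2e-76a533c5ff-ac87c-master"' | cut -f 3 -d '"')
  curl -s \
      --cacert /etc/srv/kubernetes/pki/etcd-apiserver-ca.crt \
      --cert   /etc/srv/kubernetes/pki/etcd-apiserver-client.crt \
      --key    /etc/srv/kubernetes/pki/etcd-apiserver-client.key \
      "https://127.0.0.1:2379/v2/members/$MEMBER_ID" -XDELETE -L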
W0212 22:34:17.715] Updated [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale-4/zones/us-east1-b/instances/e2e-76a533c5ff-ac87c-master].
W0212 22:34:25.482] 2020/02/12 22:34:25 main.go:755: [Boskos] Update of k8s-presubmit-scale-4 failed with Post http://boskos.test-pods.svc.cluster.local./update?name=k8s-presubmit-scale-4&owner=pull-kubernetes-kubemark-e2e-gce-big&state=busy: dial tcp 10.63.250.132:80: connect: connection refused
W0212 22:34:30.893] Project: k8s-presubmit-scale-4
W0212 22:34:30.893] Network Project: k8s-presubmit-scale-4
W0212 22:34:30.894] Zone: us-east1-b
I0212 22:34:30.994] Shutting down test cluster in background.
W0212 22:34:39.948] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale-4/global/firewalls/e2e-76a533c5ff-ac87c-kubemark-minion-http-alt].
W0212 22:34:45.011] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale-4/global/firewalls/e2e-76a533c5ff-ac87c-kubemark-minion-nodeports].
... skipping 28 lines ...
W0212 22:37:29.869] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale-4/zones/us-east1-b/instances/e2e-76a533c5ff-ac87c-kubemark-master].
I0212 22:37:30.686] Deleting firewall rules remaining in network e2e-76a533c5ff-ac87c: e2e-76a533c5ff-ac87c-kubemark-default-internal-master
I0212 22:37:30.687] e2e-76a533c5ff-ac87c-kubemark-default-internal-node
I0212 22:37:30.687] e2e-76a533c5ff-ac87c-kubemark-master-etcd
I0212 22:37:30.687] e2e-76a533c5ff-ac87c-kubemark-master-https
I0212 22:37:30.688] e2e-76a533c5ff-ac87c-kubemark-minion-all
W0212 22:37:36.851] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0212 22:37:36.852]  - The resource 'projects/k8s-presubmit-scale-4/global/firewalls/e2e-76a533c5ff-ac87c-kubemark-master-etcd' is not ready
W0212 22:37:36.852] 
W0212 22:37:37.059] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0212 22:37:37.060]  - The resource 'projects/k8s-presubmit-scale-4/global/firewalls/e2e-76a533c5ff-ac87c-kubemark-master-https' is not ready
W0212 22:37:37.060] 
W0212 22:37:38.449] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0212 22:37:38.450]  - The resource 'projects/k8s-presubmit-scale-4/global/firewalls/e2e-76a533c5ff-ac87c-kubemark-minion-all' is not ready
W0212 22:37:38.450] 
W0212 22:37:40.181] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale-4/global/firewalls/e2e-76a533c5ff-ac87c-kubemark-default-internal-master].
W0212 22:37:42.419] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale-4/global/firewalls/e2e-76a533c5ff-ac87c-kubemark-master-https].
W0212 22:37:42.873] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale-4/global/firewalls/e2e-76a533c5ff-ac87c-kubemark-master-etcd].
W0212 22:37:44.979] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale-4/global/firewalls/e2e-76a533c5ff-ac87c-kubemark-minion-all].
W0212 22:37:45.666] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale-4/global/firewalls/e2e-76a533c5ff-ac87c-kubemark-default-internal-node].
W0212 22:37:45.793] Failed to delete firewall rules.
I0212 22:37:46.904] Deleting custom subnet...
W0212 22:37:48.304] ERROR: (gcloud.compute.networks.subnets.delete) Could not fetch resource:
W0212 22:37:48.305]  - The subnetwork resource 'projects/k8s-presubmit-scale-4/regions/us-east1/subnetworks/e2e-76a533c5ff-ac87c-custom-subnet' is already being used by 'projects/k8s-presubmit-scale-4/regions/us-east1/addresses/e2e-76a533c5ff-ac87c-kubemark-master-internal-ip'
W0212 22:37:48.305] 
W0212 22:37:51.726] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale-4/regions/us-east1/addresses/e2e-76a533c5ff-ac87c-kubemark-master-ip].
W0212 22:37:54.644] ERROR: (gcloud.compute.networks.delete) Could not fetch resource:
W0212 22:37:54.645]  - The network resource 'projects/k8s-presubmit-scale-4/global/networks/e2e-76a533c5ff-ac87c' is already being used by 'projects/k8s-presubmit-scale-4/regions/us-east1/subnetworks/e2e-76a533c5ff-ac87c-custom-subnet'
W0212 22:37:54.645] 
I0212 22:37:54.746] Failed to delete network 'e2e-76a533c5ff-ac87c'. Listing firewall-rules:
W0212 22:37:56.937] 
W0212 22:37:56.937] To show all fields of the firewall, please show in JSON format: --format=json
W0212 22:37:56.937] To show all fields in table format, please see the examples in --help.
W0212 22:37:56.937] 
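The network delete fails because the custom subnet, and the reserved kubemark-master-internal-ip address inside it, still exist, so the script falls back to listing leftover firewall rules. A hand check of the blockers might look like the sketch below; the gcloud commands and their --regions flags are real, but the --filter expressions are assumptions about matching on these resource names, not taken from the log:

  # Hypothetical cleanup inspection; filter expressions are assumptions.
  gcloud compute firewall-rules list --filter="network:e2e-76a533c5ff-ac87c" --format=json
  gcloud compute addresses list --regions=us-east1 --filter="name:e2e-76a533c5ff-ac87c"
  gcloud compute networks subnets list --regions=us-east1 --filter="name:e2e-76a533c5ff-ac87c-custom-subnet"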
W0212 22:37:57.242] W0212 22:37:57.242243  100664 loader.go:223] Config not found: /go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
W0212 22:37:57.444] W0212 22:37:57.444320  100705 loader.go:223] Config not found: /go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
... skipping 17 lines ...
I0212 22:38:10.514] Property "users.k8s-presubmit-scale-4_e2e-76a533c5ff-ac87c-kubemark-basic-auth" unset.
I0212 22:38:10.733] Property "contexts.k8s-presubmit-scale-4_e2e-76a533c5ff-ac87c-kubemark" unset.
I0212 22:38:10.740] Cleared config for k8s-presubmit-scale-4_e2e-76a533c5ff-ac87c-kubemark from /workspace/.kube/config
I0212 22:38:10.741] Done
W0212 22:38:10.842] 2020/02/12 22:38:10 process.go:155: Step './test/kubemark/stop-kubemark.sh' finished in 8m15.062611843s
W0212 22:38:10.842] 2020/02/12 22:38:10 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0212 22:38:44.809] 2020/02/12 22:38:44 main.go:311: [Boskos] Fail To Release: 1 error occurred:
W0212 22:38:44.810] 	* Post http://boskos.test-pods.svc.cluster.local./release?dest=dirty&name=k8s-presubmit-scale-4&owner=pull-kubernetes-kubemark-e2e-gce-big: dial tcp 10.63.250.132:80: connect: connection refused
W0212 22:38:44.810] 
W0212 22:38:44.810] , kubetest err: <nil>
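The Boskos errors at 22:23, 22:29, 22:34, and here are all the same symptom: the boskos service in the test-pods namespace stopped answering (404, then connection refused), so the periodic project updates and the final release of k8s-presubmit-scale-4 failed even though kubetest itself reported no error. Reconstructed from the URLs in the log, the two failing requests are roughly:

  # Replay of the failing Boskos calls, URLs taken verbatim from the log;
  # only reachable from inside the Prow build cluster.
  curl -X POST "http://boskos.test-pods.svc.cluster.local./update?name=k8s-presubmit-scale-4&owner=pull-kubernetes-kubemark-e2e-gce-big&state=busy"
  curl -X POST "http://boskos.test-pods.svc.cluster.local./release?dest=dirty&name=k8s-presubmit-scale-4&owner=pull-kubernetes-kubemark-e2e-gce-big"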
W0212 22:38:44.815] Traceback (most recent call last):
W0212 22:38:44.815]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 779, in <module>
W0212 22:38:44.817]     main(parse_args())
... skipping 3 lines ...
W0212 22:38:44.818]     check_env(env, self.command, *args)
W0212 22:38:44.818]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0212 22:38:44.818]     subprocess.check_call(cmd, env=env)
W0212 22:38:44.818]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W0212 22:38:44.819]     raise CalledProcessError(retcode, cmd)
W0212 22:38:44.820] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--build=bazel', '--stage=gs://kubernetes-release-pull/ci/pull-kubernetes-kubemark-e2e-gce-big', '--up', '--down', '--provider=gce', '--cluster=e2e-76a533c5ff-ac87c', '--gcp-network=e2e-76a533c5ff-ac87c', '--extract=local', '--gcp-master-size=n1-standard-4', '--gcp-node-size=n1-standard-8', '--gcp-nodes=7', '--gcp-project-type=scalability-presubmit-project', '--gcp-zone=us-east1-b', '--kubemark', '--kubemark-nodes=500', '--test_args=--ginkgo.focus=xxxx', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--nodes=500', '--test-cmd-args=--provider=kubemark', '--test-cmd-args=--report-dir=/workspace/_artifacts', '--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_restart_count_check.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/use_simple_latency_query.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml', '--test-cmd-args=--testoverrides=./testing/load/kubemark/500_nodes/override.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=100m', '--logexporter-gcs-path=gs://kubernetes-jenkins/pr-logs/pull/batch/pull-kubernetes-kubemark-e2e-gce-big/1227700389348904960/artifacts')' returned non-zero exit status 1
E0212 22:38:44.833] Command failed
I0212 22:38:44.833] process 791 exited with code 1 after 89.9m
E0212 22:38:44.834] FAIL: pull-kubernetes-kubemark-e2e-gce-big
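The traceback above is just kubernetes_e2e.py propagating kubetest's non-zero exit status. For readability, the failing invocation from the CalledProcessError, re-wrapped as a shell command with no flags changed:

  kubetest --dump=/workspace/_artifacts \
    --gcp-service-account=/etc/service-account/service-account.json \
    --build=bazel \
    --stage=gs://kubernetes-release-pull/ci/pull-kubernetes-kubemark-e2e-gce-big \
    --up --down --provider=gce \
    --cluster=e2e-76a533c5ff-ac87c --gcp-network=e2e-76a533c5ff-ac87c \
    --extract=local \
    --gcp-master-size=n1-standard-4 --gcp-node-size=n1-standard-8 --gcp-nodes=7 \
    --gcp-project-type=scalability-presubmit-project --gcp-zone=us-east1-b \
    --kubemark --kubemark-nodes=500 \
    --test_args=--ginkgo.focus=xxxx \
    --test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh \
    --test-cmd-args=cluster-loader2 \
    --test-cmd-args=--nodes=500 \
    --test-cmd-args=--provider=kubemark \
    --test-cmd-args=--report-dir=/workspace/_artifacts \
    --test-cmd-args=--testconfig=testing/density/config.yaml \
    --test-cmd-args=--testconfig=testing/load/config.yaml \
    --test-cmd-args=--testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml \
    --test-cmd-args=--testoverrides=./testing/experiments/enable_restart_count_check.yaml \
    --test-cmd-args=--testoverrides=./testing/experiments/use_simple_latency_query.yaml \
    --test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml \
    --test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml \
    --test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml \
    --test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml \
    --test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml \
    --test-cmd-args=--testoverrides=./testing/load/kubemark/500_nodes/override.yaml \
    --test-cmd-name=ClusterLoaderV2 \
    --timeout=100m \
    --logexporter-gcs-path=gs://kubernetes-jenkins/pr-logs/pull/batch/pull-kubernetes-kubemark-e2e-gce-big/1227700389348904960/artifacts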
I0212 22:38:44.834] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0212 22:38:45.486] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0212 22:38:45.570] process 101161 exited with code 0 after 0.0m
I0212 22:38:45.571] Call:  gcloud config get-value account
I0212 22:38:46.131] process 101174 exited with code 0 after 0.0m
I0212 22:38:46.132] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
... skipping 29 lines ...