Result: FAILURE
Tests: 1 failed / 0 succeeded
Started: 2020-06-27 17:53
Elapsed: 4h0m
Revision: master
Resultstore: https://source.cloud.google.com/results/invocations/60bcd5b2-9470-4947-bf86-dd9aa0460b73/targets/test

Test Failures


ClusterLoaderV2 testing/windows-tests/config.yaml 3h37m

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=ClusterLoaderV2\stesting\/windows\-tests\/config\.yaml$'
[measurement call WaitForControlledPodsRunning - WaitForRunningLatencyRCs error: 21 objects timed out: ReplicationControllers: test-v7nlhs-58/latency-pod-rc-0, test-v7nlhs-59/latency-pod-rc-0, test-v7nlhs-61/latency-pod-rc-0, test-v7nlhs-74/latency-pod-rc-0, test-v7nlhs-73/latency-pod-rc-0, test-v7nlhs-63/latency-pod-rc-0, test-v7nlhs-71/latency-pod-rc-0, test-v7nlhs-60/latency-pod-rc-0, test-v7nlhs-79/latency-pod-rc-0, test-v7nlhs-66/latency-pod-rc-0, test-v7nlhs-75/latency-pod-rc-0, test-v7nlhs-76/latency-pod-rc-0, test-v7nlhs-65/latency-pod-rc-0, test-v7nlhs-72/latency-pod-rc-0, test-v7nlhs-67/latency-pod-rc-0, test-v7nlhs-62/latency-pod-rc-0, test-v7nlhs-70/latency-pod-rc-0, test-v7nlhs-69/latency-pod-rc-0, test-v7nlhs-57/latency-pod-rc-0, test-v7nlhs-68/latency-pod-rc-0, test-v7nlhs-80/latency-pod-rc-0
measurement call PodStartupLatency - PodStartupLatency error: pod startup: too high latency 90th percentile: got 1h47m19.685100062s expected: 1h0m0s]
				from junit.xml
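
The failing check is a percentile-vs-threshold comparison: the measured 90th-percentile pod startup latency (1h47m19s) is above the 1h0m0s limit configured for this test. As a rough, hypothetical sketch of that kind of check (not clusterloader2's actual implementation; the function name and sample durations below are made up for illustration):

package main

import (
	"fmt"
	"math"
	"sort"
	"time"
)

// percentile returns the p-th percentile (0 < p <= 100) of the given durations.
func percentile(durations []time.Duration, p float64) time.Duration {
	if len(durations) == 0 {
		return 0
	}
	sorted := append([]time.Duration(nil), durations...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	idx := int(math.Ceil(float64(len(sorted))*p/100)) - 1
	return sorted[idx]
}

func main() {
	// Illustrative sample only: most pods start within seconds, a long tail
	// takes hours, mirroring the distribution reported further down in the log.
	samples := []time.Duration{
		5 * time.Second, 6 * time.Second, 6 * time.Second, 7 * time.Second,
		92*time.Minute + 34*time.Second, 107*time.Minute + 20*time.Second,
	}
	threshold := time.Hour

	if p90 := percentile(samples, 90); p90 > threshold {
		fmt.Printf("pod startup: too high latency 90th percentile: got %v expected: %v\n", p90, threshold)
	}
}

With the distribution reported in the build log (most pods ready in 5-8s, a tail of pods taking between roughly 1h17m and 2h23m), the 90th percentile lands in that tail, which is what trips the limit.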



Error lines from build-log.txt

... skipping 362 lines ...
Trying to find master named 'e2e-567298e4a1-42aea-master'
Looking for address 'e2e-567298e4a1-42aea-master-ip'
Using master: e2e-567298e4a1-42aea-master (external IP: 34.105.80.58; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

..........Kubernetes cluster created.
Cluster "k8s-gce-slow-1-5_e2e-567298e4a1-42aea" set.
User "k8s-gce-slow-1-5_e2e-567298e4a1-42aea" set.
Context "k8s-gce-slow-1-5_e2e-567298e4a1-42aea" created.
Switched to context "k8s-gce-slow-1-5_e2e-567298e4a1-42aea".
... skipping 31 lines ...
Found 3 node(s).
NAME                                           STATUS                     ROLES    AGE     VERSION
e2e-567298e4a1-42aea-master                    Ready,SchedulingDisabled   <none>   3m39s   v1.19.0-beta.2.494+fe4fae034311dd
e2e-567298e4a1-42aea-minion-group-1ngk         Ready                      <none>   3m29s   v1.19.0-beta.2.494+fe4fae034311dd
e2e-567298e4a1-42aea-windows-node-group-l33z   Ready                      <none>   24s     v1.19.0-beta.2.494+fe4fae034311dd
Validate output:
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
Cluster validation succeeded
Done, listing cluster services:
... skipping 287 lines ...
I0627 18:10:09.102651   15694 framework.go:239] Applying /home/prow/go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/default/prometheus-serviceMonitorLegacyKubeDNS.yaml
I0627 18:10:09.139292   15694 prometheus.go:238] Exposing kube-apiserver metrics in the cluster
I0627 18:10:09.282863   15694 framework.go:239] Applying /home/prow/go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-endpoints.yaml
I0627 18:10:09.320612   15694 framework.go:239] Applying /home/prow/go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-service.yaml
I0627 18:10:09.356863   15694 framework.go:239] Applying /home/prow/go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-serviceMonitor.yaml
I0627 18:10:09.394800   15694 prometheus.go:317] Waiting for Prometheus stack to become healthy...
W0627 18:10:39.436100   15694 util.go:64] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090), response: "k8s\x00\n\f\n\x02v1\x12\x06Status\x12g\n\x06\n\x00\x12\x00\x1a\x00\x12\aFailure\x1a=no endpoints available for service \"http:prometheus-k8s:9090\"\"\x12ServiceUnavailable0\xf7\x03\x1a\x00\"\x00"
I0627 18:11:09.447496   15694 util.go:93] 11/12 targets are ready, example not ready target: {map[instance:e2e-567298e4a1-42aea-windows-node-group-l33z:5000 job:monitoring/windows_static] unknown}
I0627 18:11:39.443717   15694 util.go:96] All 12 expected targets are ready
I0627 18:11:39.443764   15694 prometheus.go:196] Prometheus stack set up successfully
W0627 18:11:39.443961   15694 imagepreload.go:86] No images specified. Skipping image preloading
I0627 18:11:39.443983   15694 clusterloader.go:211] --------------------------------------------------------------------------------
I0627 18:11:39.443989   15694 clusterloader.go:212] Running testing/windows-tests/config.yaml
... skipping 45203 lines ...
I0627 21:48:28.122933   15694 phase_latency.go:135] PodStartupLatency: perc50: 2.74157016s, perc90: 2m59.471311148s, perc99: 14m5.750721804s
I0627 21:48:28.122993   15694 phase_latency.go:130] PodStartupLatency: 0 worst schedule_to_watch latencies: []
I0627 21:48:28.123000   15694 phase_latency.go:135] PodStartupLatency: perc50: 0s, perc90: 0s, perc99: 0s
I0627 21:48:28.123054   15694 phase_latency.go:130] PodStartupLatency: 71 worst pod_startup latencies: [{test-v7nlhs-8/latency-pod-rc-0-g5bml 4.903951334s} {test-v7nlhs-12/latency-pod-rc-0-rvtgk 4.96576877s} {test-v7nlhs-14/latency-pod-rc-0-b5c99 4.967005331s} {test-v7nlhs-24/latency-pod-rc-0-jvpcw 5.164953352s} {test-v7nlhs-2/latency-pod-rc-0-s82j9 5.167031867s} {test-v7nlhs-33/latency-pod-rc-0-jzncl 5.222095045s} {test-v7nlhs-29/latency-pod-rc-0-f5th7 5.303081098s} {test-v7nlhs-20/latency-pod-rc-0-cjxvw 5.36707376s} {test-v7nlhs-17/latency-pod-rc-0-vjbpp 5.496584472s} {test-v7nlhs-47/latency-pod-rc-0-wh2x5 5.501757962s} {test-v7nlhs-35/latency-pod-rc-0-9b5cm 5.520563979s} {test-v7nlhs-25/latency-pod-rc-0-f4qbj 5.540710776s} {test-v7nlhs-13/latency-pod-rc-0-tkr54 5.586792954s} {test-v7nlhs-3/latency-pod-rc-0-vfdwn 5.58837092s} {test-v7nlhs-53/latency-pod-rc-0-22zts 5.608811448s} {test-v7nlhs-50/latency-pod-rc-0-w222l 5.632436464s} {test-v7nlhs-27/latency-pod-rc-0-77trq 5.635346492s} {test-v7nlhs-11/latency-pod-rc-0-4887t 5.737987277s} {test-v7nlhs-9/latency-pod-rc-0-s74rn 5.74157016s} {test-v7nlhs-51/latency-pod-rc-0-qj4h6 5.965577277s} {test-v7nlhs-5/latency-pod-rc-0-7v6f2 6.026291982s} {test-v7nlhs-30/latency-pod-rc-0-f6mk7 6.032095711s} {test-v7nlhs-31/latency-pod-rc-0-rtvw9 6.069618637s} {test-v7nlhs-10/latency-pod-rc-0-4kcjz 6.083110561s} {test-v7nlhs-28/latency-pod-rc-0-hz8jt 6.119062589s} {test-v7nlhs-1/latency-pod-rc-0-c7tnz 6.141722453s} {test-v7nlhs-46/latency-pod-rc-0-zvkz4 6.142092767s} {test-v7nlhs-41/latency-pod-rc-0-gk4lh 6.206613598s} {test-v7nlhs-52/latency-pod-rc-0-x8glv 6.257554204s} {test-v7nlhs-6/latency-pod-rc-0-f85wc 6.262229558s} {test-v7nlhs-23/latency-pod-rc-0-tthq5 6.271253502s} {test-v7nlhs-15/latency-pod-rc-0-p9wqp 6.279429447s} {test-v7nlhs-55/latency-pod-rc-0-fdtxg 6.290801191s} {test-v7nlhs-4/latency-pod-rc-0-kvfpk 6.30470632s} {test-v7nlhs-36/latency-pod-rc-0-ltzm9 6.316121767s} {test-v7nlhs-21/latency-pod-rc-0-pc7dz 6.335645955s} {test-v7nlhs-7/latency-pod-rc-0-2dsn7 6.364614957s} {test-v7nlhs-19/latency-pod-rc-0-42w2s 6.382010898s} {test-v7nlhs-22/latency-pod-rc-0-5k9dk 6.384987527s} {test-v7nlhs-48/latency-pod-rc-0-rdw2d 6.38594839s} {test-v7nlhs-54/latency-pod-rc-0-mzs6f 6.434736275s} {test-v7nlhs-16/latency-pod-rc-0-xpm8g 6.496837158s} {test-v7nlhs-38/latency-pod-rc-0-rhcwc 6.502351701s} {test-v7nlhs-26/latency-pod-rc-0-m9sxb 6.605485318s} {test-v7nlhs-18/latency-pod-rc-0-9jvx9 6.630038866s} {test-v7nlhs-49/latency-pod-rc-0-zpvxj 6.79956578s} {test-v7nlhs-34/latency-pod-rc-0-hp8lw 6.877085595s} {test-v7nlhs-32/latency-pod-rc-0-45d95 6.925628071s} {test-v7nlhs-37/latency-pod-rc-0-ghrcw 7.030549826s} {test-v7nlhs-39/latency-pod-rc-0-fwrd6 7.036544507s} {test-v7nlhs-42/latency-pod-rc-0-dpjlk 7.122851886s} {test-v7nlhs-44/latency-pod-rc-0-r867n 7.23757163s} {test-v7nlhs-56/latency-pod-rc-0-h58zk 7.342620948s} {test-v7nlhs-45/latency-pod-rc-0-cdq94 7.609600369s} {test-v7nlhs-43/latency-pod-rc-0-vhvmn 7.614398018s} {test-v7nlhs-40/latency-pod-rc-0-gpg6x 7.698439655s} {test-v7nlhs-77/latency-pod-rc-0-cd82j 1h16m55.311183825s} {test-v7nlhs-78/latency-pod-rc-0-qnjlv 1h26m46.903221353s} {test-v7nlhs-64/latency-pod-rc-0-xtnrz 1h28m49.224994796s} {test-v7nlhs-74/latency-pod-rc-0-7h8jr 1h32m34.297097283s} {test-v7nlhs-80/latency-pod-rc-0-mbw7k 1h33m21.337548033s} {test-v7nlhs-58/latency-pod-rc-0-ff8k6 1h43m33.19383595s} {test-v7nlhs-69/latency-pod-rc-0-v65vm 1h44m7.193542977s} {test-v7nlhs-67/latency-pod-rc-0-cqtvv 1h47m19.685100062s} 
{test-v7nlhs-65/latency-pod-rc-0-q6fc2 1h48m25.449383122s} {test-v7nlhs-62/latency-pod-rc-0-wsfwm 1h50m7.471311148s} {test-v7nlhs-57/latency-pod-rc-0-9z279 1h54m54.750721804s} {test-v7nlhs-68/latency-pod-rc-0-9sdxd 2h3m55.091740848s} {test-v7nlhs-76/latency-pod-rc-0-86bvg 2h14m33.419898253s} {test-v7nlhs-75/latency-pod-rc-0-5kk55 2h15m6.69264415s} {test-v7nlhs-60/latency-pod-rc-0-j668v 2h23m26.18368776s}]
I0627 21:48:28.123206   15694 phase_latency.go:135] PodStartupLatency: perc50: 6.335645955s, perc90: 1h47m19.685100062s, perc99: 2h23m26.18368776s; threshold 1h0m0s
E0627 21:48:28.123223   15694 pod_startup_latency.go:168] PodStartupLatency: labelSelector(group = latency): pod startup: too high latency 90th percentile: got 1h47m19.685100062s expected: 1h0m0s
W0627 21:48:28.123773   15694 simple_test_executor.go:165] Got errors during step execution: [measurement call PodStartupLatency - PodStartupLatency error: pod startup: too high latency 90th percentile: got 1h47m19.685100062s expected: 1h0m0s]
I0627 21:48:28.124086   15694 prometheus_measurement.go:79] APIResponsivenessPrometheusSimple gathering results
I0627 21:48:28.124353   15694 prometheus.go:109] Executing "histogram_quantile(0.50, sum(rate(apiserver_request_duration_seconds_bucket{verb!~\"WATCH|WATCHLIST|PROXY|CONNECT\"}[216m])) by (resource,  subresource, verb, scope, le))" at 2020-06-27T21:48:28Z
I0627 21:48:29.820371   15694 prometheus.go:109] Executing "histogram_quantile(0.90, sum(rate(apiserver_request_duration_seconds_bucket{verb!~\"WATCH|WATCHLIST|PROXY|CONNECT\"}[216m])) by (resource,  subresource, verb, scope, le))" at 2020-06-27T21:48:28Z
I0627 21:48:31.448115   15694 prometheus.go:109] Executing "histogram_quantile(0.99, sum(rate(apiserver_request_duration_seconds_bucket{verb!~\"WATCH|WATCHLIST|PROXY|CONNECT\"}[216m])) by (resource,  subresource, verb, scope, le))" at 2020-06-27T21:48:28Z
I0627 21:48:33.006597   15694 prometheus.go:109] Executing "sum(increase(apiserver_request_duration_seconds_count{verb!~\"WATCH|WATCHLIST|PROXY|CONNECT\"}[216m])) by (resource, subresource, scope, verb)" at 2020-06-27T21:48:28Z
I0627 21:48:33.094573   15694 api_responsiveness_prometheus.go:273] APIResponsivenessPrometheusSimple: Top latency metric: {Resource:subjectaccessreviews Subresource: Verb:POST Scope:cluster Latency:perc50: 25ms, perc90: 45ms, perc99: 49.5ms Count:1142 SlowCount:0}; threshold: 1s
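
The "Executing histogram_quantile(...)" lines above are PromQL queries that the APIResponsivenessPrometheusSimple measurement runs against the Prometheus stack deployed earlier in the job; the 216m range roughly matches the elapsed test time, so each quantile covers the whole run. A minimal, hypothetical sketch of issuing one such query with the Prometheus Go client (the address is a placeholder, and this is not how clusterloader2 itself wires its queries):

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/prometheus/client_golang/api"
	v1 "github.com/prometheus/client_golang/api/prometheus/v1"
)

func main() {
	// Placeholder address; in the test cluster Prometheus is reached via the
	// prometheus-k8s service in the monitoring namespace on port 9090.
	client, err := api.NewClient(api.Config{Address: "http://prometheus-k8s.monitoring.svc:9090"})
	if err != nil {
		panic(err)
	}
	promAPI := v1.NewAPI(client)

	// One of the apiserver latency quantile queries from the log above.
	query := `histogram_quantile(0.90, sum(rate(apiserver_request_duration_seconds_bucket{verb!~"WATCH|WATCHLIST|PROXY|CONNECT"}[216m])) by (resource, subresource, verb, scope, le))`
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	result, warnings, err := promAPI.Query(ctx, query, time.Now())
	if err != nil {
		panic(err)
	}
	if len(warnings) > 0 {
		fmt.Println("warnings:", warnings)
	}
	fmt.Println(result)
}
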
... skipping 6 lines ...
I0627 21:48:33.095187   15694 simple_test_executor.go:94] 
I0627 21:48:33.095392   15694 reflector.go:181] Stopping reflector *unstructured.Unstructured (0s) from pkg/mod/k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125
I0627 21:48:48.340759   15694 simple_test_executor.go:384] Resources cleanup time: 15.245480454s
E0627 21:48:48.340811   15694 clusterloader.go:221] --------------------------------------------------------------------------------
E0627 21:48:48.340817   15694 clusterloader.go:222] Test Finished
E0627 21:48:48.340822   15694 clusterloader.go:223]   Test: testing/windows-tests/config.yaml
E0627 21:48:48.340827   15694 clusterloader.go:224]   Status: Fail
E0627 21:48:48.340833   15694 clusterloader.go:226]   Errors: [measurement call WaitForControlledPodsRunning - WaitForRunningLatencyRCs error: 21 objects timed out: ReplicationControllers: test-v7nlhs-58/latency-pod-rc-0, test-v7nlhs-59/latency-pod-rc-0, test-v7nlhs-61/latency-pod-rc-0, test-v7nlhs-74/latency-pod-rc-0, test-v7nlhs-73/latency-pod-rc-0, test-v7nlhs-63/latency-pod-rc-0, test-v7nlhs-71/latency-pod-rc-0, test-v7nlhs-60/latency-pod-rc-0, test-v7nlhs-79/latency-pod-rc-0, test-v7nlhs-66/latency-pod-rc-0, test-v7nlhs-75/latency-pod-rc-0, test-v7nlhs-76/latency-pod-rc-0, test-v7nlhs-65/latency-pod-rc-0, test-v7nlhs-72/latency-pod-rc-0, test-v7nlhs-67/latency-pod-rc-0, test-v7nlhs-62/latency-pod-rc-0, test-v7nlhs-70/latency-pod-rc-0, test-v7nlhs-69/latency-pod-rc-0, test-v7nlhs-57/latency-pod-rc-0, test-v7nlhs-68/latency-pod-rc-0, test-v7nlhs-80/latency-pod-rc-0
measurement call PodStartupLatency - PodStartupLatency error: pod startup: too high latency 90th percentile: got 1h47m19.685100062s expected: 1h0m0s]
E0627 21:48:48.340845   15694 clusterloader.go:228] --------------------------------------------------------------------------------

JUnit report was created: /logs/artifacts/junit.xml
I0627 21:48:48.341900   15694 prometheus.go:209] Tearing down prometheus stack
F0627 21:49:03.451713   15694 clusterloader.go:331] 1 tests have failed!
2020/06/27 21:49:03 process.go:155: Step '/home/prow/go/src/k8s.io/windows-testing/gce/load-test.sh cluster-loader2 --nodes=1 --provider=gce --report-dir=/logs/artifacts --enable-prometheus-server=true --testconfig=testing/windows-tests/config.yaml --testoverrides=./testing/windows-tests/windows_override.yaml' finished in 3h46m8.140010619s
2020/06/27 21:49:03 e2e.go:534: Dumping logs locally to: /logs/artifacts
2020/06/27 21:49:03 process.go:153: Running: ./cluster/log-dump/log-dump.sh /logs/artifacts
Checking for custom logdump instances, if any
Sourcing kube-util.sh
Detecting project
... skipping 10 lines ...
Specify --start=45158 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/konnectivity-server.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts'
Detecting nodes in the cluster
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Changing logfiles to be world-readable for download
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov  startupscript.log' from e2e-567298e4a1-42aea-minion-group-1ngk

Specify --start=48243 in the next get-serial-port-output invocation to get only the new output starting from here.
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].

Specify --start=181534 in the next get-serial-port-output invocation to get only the new output starting from here.
INSTANCE_GROUPS=e2e-567298e4a1-42aea-minion-group
NODE_NAMES=e2e-567298e4a1-42aea-minion-group-1ngk
Failures for e2e-567298e4a1-42aea-minion-group (if any):
Failures for e2e-567298e4a1-42aea-windows-node-group (if any):
... skipping 26 lines ...
NODE_NAMES=e2e-567298e4a1-42aea-minion-group-1ngk
Bringing down cluster
Deleting Managed Instance Group...
Deleting Managed Instance Group...
...Deleted [https://www.googleapis.com/compute/v1/projects/k8s-gce-slow-1-5/zones/us-west1-b/instanceGroupManagers/e2e-567298e4a1-42aea-windows-node-group].
done.
{"component":"entrypoint","file":"prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 4h0m0s timeout","severity":"error","time":"2020-06-27T21:53:28Z"}
.{"component":"entrypoint","file":"prow/entrypoint/run.go:245","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","severity":"error","time":"2020-06-27T21:53:43Z"}